This page describes a file format in Smash Hit, going into many technical details. This page is not a tutorial about how to mod, but rather information that is useful for making tools that can be used to modify Smash Hit.

[Figure: Smash Hit mesh baking demo. An example of mesh files that have been generated from a segment and then loaded into Smash Hit.]

In Smash Hit, segments have two files, one of which is a LitMesh (extension .mesh) file. These files only store box vertex data and visuals. Box collision, obstacles, decals and everything else use XML segment files.

Stored file format

The files are compressed in the ZLIB format (RFC 1950). After decompressing, the files consist of a list of vertices and indexes.
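For example, decompressing a mesh file in Python needs only the standard zlib module (the file name here is just a placeholder):

import zlib

# Read a compressed .mesh file and inflate it (RFC 1950 zlib stream)
with open("example.mesh", "rb") as f:
    data = zlib.decompress(f.read())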

Layout

Here is the exact format of the file:

Vertex Count (4 bytes, int32_t)
    The number of vertices in the file.

Vertex Data (24 bytes per vertex)

    Name     Type     Size          Range        Description
    X, Y, Z  float    4 bytes each  [-inf, inf]  Vertex coordinate
    U, V     float    4 bytes each  [0.0, 1.0]   Texture coordinate
    R        uint8_t  1 byte        [0, 127]     Red colour
    G        uint8_t  1 byte        [0, 127]     Green colour
    B        uint8_t  1 byte        [0, 127]     Blue colour
    A        uint8_t  1 byte        [255, 0]     Shadow

Index Count (4 bytes, int32_t)
    The number of indexes in the file.

Index Data (12 bytes per triangle)

    Name     Type      Size          Description
    D, E, F  uint32_t  4 bytes each  Index of a triangle vertex
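To make the layout concrete, here is a minimal Python sketch that parses the decompressed data. It assumes little-endian byte order (which matches the platforms the game ships on); the function name is illustrative:

import struct

def parse_mesh(data):
    # Vertex count (int32), followed by 24 bytes per vertex
    (vertex_count,) = struct.unpack_from("<i", data, 0)
    offset = 4
    
    vertices = []
    for _ in range(vertex_count):
        # X, Y, Z, U, V as floats; R, G, B, A as single bytes
        vertices.append(struct.unpack_from("<5f4B", data, offset))
        offset += 24
    
    # Index count (int32), followed by 12 bytes per triangle
    (index_count,) = struct.unpack_from("<i", data, offset)
    offset += 4
    
    triangles = []
    for _ in range(index_count):
        triangles.append(struct.unpack_from("<3I", data, offset))
        offset += 12
    
    return vertices, triangles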

Structures

Note that these structures are given for reference only; they do not appear in the game's code in this form.

Vertex Header

struct Mesh_Vertex_Header {
    uint32_t size;
};

Vertex

struct Mesh_Vertex {
    float x;
    float y;
    float z;
    float u;
    float v;
    uint8_t r;
    uint8_t g;
    uint8_t b;
    uint8_t a;
};

Index Header

struct Mesh_Index_Header {
    uint32_t size;
};

Index

struct Mesh_Triangle {
    uint32_t d;
    uint32_t e;
    uint32_t f;
};

Loading

There is no single function in the game that loads mesh files; they appear to be loaded inline each time they are needed. The general process is:

void GameFunction(...) {
    // Open the file stream ...
    
    uint32_t vertex_count;
    
    // Read vertex data count
    stream.readInt32(&vertex_count);
    
    // Allocate vertex data (24 bytes per vertex)
    uint8_t *vertex_data = (uint8_t *) QiAlloc(vertex_count * 24);
    
    // Load vertex data into memory
    for (size_t i = 0; i < vertex_count; i++) {
        stream.readFloat((float *) &vertex_data[(i * 0x18) + 0x00]); // X
        stream.readFloat((float *) &vertex_data[(i * 0x18) + 0x04]); // Y
        stream.readFloat((float *) &vertex_data[(i * 0x18) + 0x08]); // Z
        stream.readFloat((float *) &vertex_data[(i * 0x18) + 0x0c]); // U
        stream.readFloat((float *) &vertex_data[(i * 0x18) + 0x10]); // V
        stream.readInt32((uint32_t *) &vertex_data[(i * 0x18) + 0x14]); // RGBA, packed
    }
    
    uint32_t index_count;
    
    // Read index data count
    stream.readInt32(&index_count);
    
    // Allocate index data (12 bytes per triangle)
    uint8_t *index_data = (uint8_t *) QiAlloc(index_count * 12);
    
    // Load index data into memory
    for (size_t i = 0; i < index_count; i++) {
        stream.readInt32((uint32_t *) &index_data[(i * 0xc) + 0x0]);
        stream.readInt32((uint32_t *) &index_data[(i * 0xc) + 0x4]);
        stream.readInt32((uint32_t *) &index_data[(i * 0xc) + 0x8]);
    }
    
    // Clean up and push data to gpu ...
}

Textures

The textures are only stored in one file: tiles.png.mtx. The texture coordinates are set up so that a tile's ID alone is enough to determine its texture coordinates in the atlas.

Baking

Normally, the mesh files would be baked alongside the segment itself in the editor or using a developer menu. It appears that the LitMesh class was used for this during development. While the class is still present in the final libsmashhit.so, it is not possible to use it to directly bake a mesh file (or rather, it would be very inconvenient to do that), so mesh baking must be done using a third party utility.

The first step of baking is usually to load the segment and template files. With this, the boxes can then be processed and converted to a triangle mesh. The first phase of this usually involves subdividing the surfaces of each box into smaller quads, usually the size of one tile. After this, the textures and shading can be applied to the vertices, and they can be serialised to a mesh file, as shown in the sketch below.
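To make the serialisation step concrete, here is a minimal Python sketch that packs vertex and triangle lists into the stored format described above and writes a single textured quad. This is only a sketch of the format, not code from the game or from any existing tool:

import struct
import zlib

def serialise_mesh(vertices, triangles):
    # vertices: tuples of (x, y, z, u, v, r, g, b, a)
    # triangles: tuples of (d, e, f) vertex indexes
    data = struct.pack("<i", len(vertices))
    for v in vertices:
        data += struct.pack("<5f4B", *v)
    data += struct.pack("<i", len(triangles))
    for t in triangles:
        data += struct.pack("<3I", *t)
    return zlib.compress(data)

# One quad: four vertices with full-tile UVs, mid grey, no shadow
quad = [
    (0.0, 0.0, 0.0, 0.0, 0.0, 63, 63, 63, 255),
    (1.0, 0.0, 0.0, 1.0, 0.0, 63, 63, 63, 255),
    (1.0, 1.0, 0.0, 1.0, 1.0, 63, 63, 63, 255),
    (0.0, 1.0, 0.0, 0.0, 1.0, 63, 63, 63, 255),
]

with open("example.mesh", "wb") as f:
    f.write(serialise_mesh(quad, [(0, 1, 2), (0, 2, 3)]))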

Texture coordinates

The texture coordinates are standard OpenGL texture coordinates in the range [0, 1]. Note that the texture will be loaded flipped and will not be corrected, which makes the texture coordinates more "normal" in that the first tile has its top left corner at (0, 0) and its bottom right corner at (w, h), where w and h are the width and height of a single tile in texture coordinates.

In Smash Hit, the texture coordinates are cropped, most likely so that the edges of tiles do not bleed into neighbouring tiles. By default, Smash Hit cuts off 0.03125 of a tile on each side. This is why no tile in tiles.png.mtx looks exactly like the tiles seen in the starting room: enough of each tile is cropped away that part of it is never drawn, making the in-game tile look slightly different from the raw texture.
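A sketch of how that cropping could be applied when generating a tile's UV rectangle (the 0.03125 value is the default mentioned above; the helper name is illustrative):

CROP = 0.03125  # fraction of a tile cut off on each side

def crop_tile_uv(u, v, tile_w, tile_h):
    # Shrink the tile's UV rectangle inward on all four sides
    return (
        u + CROP * tile_w,          # left edge
        v + CROP * tile_h,          # top edge
        u + (1.0 - CROP) * tile_w,  # right edge
        v + (1.0 - CROP) * tile_h,  # bottom edge
    )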

Calculations

To calculate the coordinates of a texture on the grid, use integer division to find the row and modulo to find the column. After you have found these, divide them by the count of rows and columns to get the texture coordinates in the range [0, 1]. For example, to find the top left coordinate of a tile:

ROW_COUNT = 8
COL_COUNT = 8
TILE_WIDTH = 1 / COL_COUNT
TILE_HEIGHT = 1 / ROW_COUNT

def get_tile_uv(tile):
    # Integer division finds the row, modulo finds the column
    row = tile // COL_COUNT
    col = tile % COL_COUNT
    # Scale to the [0, 1] texture coordinate range
    u = col * TILE_WIDTH
    v = row * TILE_HEIGHT
    return (u, v)
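For example, tile 9 sits at row 1, column 1 of the 8×8 grid, so its top left corner is an eighth of the way across and down:

>>> get_tile_uv(9)
(0.125, 0.125)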

Or expressed mathematically, for tile t on a grid with R rows and C columns, the top left coordinate is:

(u, v) = ((t mod C) / C, floor(t / C) / R)

The other points are (u, v + h), (u + w, v + h) and (u + w, v) for the bottom left, bottom right and top right coordinates respectively, where w and h are the width and height of a single tile in texture coordinates (which are the inverse of the column and row counts, respectively).

Colours

Colours are a bit weird in mesh files. The RGB channels are eight bit integers in the range [0, 255], though they are normalised (converted to floats and divided by 255) and then multiplied by two in the shaders (see the vColor calculation in the shaders section), so only values in the range [0, 127] (or [0.0, 0.5] after normalisation) will produce colours that are not an overblown white. The same is not true for the A channel, which must represent everything in [0, 255], otherwise everything will look overly dark. The shadow amount (one minus the normalised A value) is squared when it is factored into the colour of the mesh, so a value of 127 (about 0.5) will actually make the colours about 0.25 darker (multiply by roughly 0.75), not 0.5 darker.

Note that the colours will be gamma corrected by the game, so the values stored in the mesh should not already be gamma corrected.
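To illustrate, here is a small Python sketch mirroring the colour maths from the shader shown later on this page (ignoring fog and gamma correction; this is a reading of the shader, not code from the game):

def mesh_colour_to_brightness(r, g, b, a):
    # Normalise the stored bytes, as the GPU does
    r, g, b, a = r / 255, g / 255, b / 255, a / 255
    # The shader doubles the RGB channels
    r, g, b = 2 * r, 2 * g, 2 * b
    # The shadow term is squared: light = 1 - (1 - a)^2
    light = 1 - (1 - a) ** 2
    return (r * light, g * light, b * light)

# Stored bytes of 127 in every channel come out near (0.75, 0.75, 0.75)
print(mesh_colour_to_brightness(127, 127, 127, 127))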

Formulas

Note: Don't just blindly use these if you don't know what they mean. They are only intended for reference purposes.

Overall, to convert a colour from the color property to a mesh colour, use:

(R, G, B, A) = (floor(127r), floor(127g), floor(127b), floor(255s))

where s is the fourth component (otherwise called a). Since converting from floating point to integer truncates automatically, this can be implemented as:

def convert_colour(r, g, b, a):
    r = int(r * 127)
    g = int(g * 127)
    b = int(b * 127)
    a = int(a * 255)
    return r, g, b, a
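For example, a mid grey with full shadow strength:

>>> convert_colour(0.5, 0.5, 0.5, 1.0)
(63, 63, 63, 255)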

Lighting

As part of baking a mesh, Smash Hit applies per-vertex lighting.

Before actual lighting starts, the game computes a set of evenly distributed points on a unit hemisphere. This is done using an algorithm whose source code Dennis previously provided and later took down, but which has been reuploaded as a GitHub gist (see the reference material below).
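For readers who just need evenly spread hemisphere directions, a common substitute is the Fibonacci spiral method sketched below. Note that this is not the algorithm from the gist, only a stand-in that produces a similar distribution:

import math

def fibonacci_hemisphere(count):
    # Spread `count` points roughly evenly over the upper unit hemisphere
    golden_angle = math.pi * (3 - math.sqrt(5))
    points = []
    for i in range(count):
        z = (i + 0.5) / count         # height above the base, in (0, 1)
        r = math.sqrt(1 - z * z)      # radius of the circle at that height
        theta = golden_angle * i      # angle around the vertical axis
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points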

Decompilation

Note: This section is not finished and may not be an accurate representation of what is happening.

This section contains an incomplete decompilation of LitMesh for reference purposes.

Lighting

Here is some incomplete pseudocode that shows how LitMesh::getLight works:

/**
 * Computes a raycast and returns its first hit
 * 
 * @param origin Origin of the ray
 * @param direction Direction of the ray
 * @param first_hit The location of the first hit of the ray
 * @return Non-zero if the ray hit anything, zero otherwise
 */
bool LitMesh::raycast(QiVec3& origin, QiVec3& direction, QiVec3& first_hit);

float LitMesh::getLight(QiVec3& position, QiVec3& normal, undefined4 in_r3, float in_s0) {
    /**
     * Get the alpha value at a point in the scene.
     * 
     * @param this <r0> Pointer to the current LitMesh instance (implicit)
     * @param position <r1> Position of the point to get the light around
     * @param normal <r2> Normal vector for the surface (maybe?)
     * @param in_r3 <r3> The use of in_r3 is not known
     * @param in_s0 <s0> The use of in_s0 is not known
     * @return Total shadow (equal to the alpha channel)
     */
    
    // Initialise unit sphere points, if not already done
    if (!gSpherePointsInitialised) {
        distributePointsOnUnitSphere(1000, 64, &gSpherePoints, 12, true);
    }
    
    // There is more before the main of this function, but it
    // does not seem to be the most important for us.
    
    // Advance by 0.02 in the direction facing out of the surface
    // (the surface normal)
    position += (0.02f * normal);
    
    // The accumulator for recording raycast hits
    float light = 0.0f;
    
    // Raycast in the direction of each point
    for (size_t i = 0; i < 64; i++) {
        QiVec3 origin, direction, first_hit;
        
        // Get the sphere point and call it our base ray
        QiVec3 base_ray = gSpherePoints[i];
        
        // Rotate the sphere point to align to the side
        base_ray = RotateTo(normal, base_ray);
        
        // Calculate the direction of the raycast
        // (computed before the origin, which depends on it)
        direction = position + (0.02f * base_ray);
        
        // Calculate the origin of the raycast
        origin = position + ((float)in_r3 * direction);
        
        // Perform the raycast
        bool hit = this->raycast(origin, direction, first_hit);
        
        // If the raycast has been hit
        if (hit) {
            // Subtract hit position from current base position
            first_hit = first_hit - position;
            
            // Compute the dot product between the hit and direction
            // This computes the facing ratio (do a search for it)
            float cosine = dot(first_hit, direction) / in_s19; // in_s19 == ???
            
            // Anything which faces away from the light gets zero
            // light, not negative light (part of facing ratio)
            light += fmax(cosine, 0.0f);
        }
    }
    
    // Normalise the amount with respect to the number of rays tested.
    // Remember that max(cosine, 0) will not return anything more than 1.0 so
    // this is always in the range [0.0, 1.0]
    light *= (1.0f / 64.0f);
    
    // Return 1.0 - sqrt(light) which is shadow channel (alpha)
    return 1.0f - powf(light, 0.5f);
}

It is interesting to note that since we return 1 - sqrt(light) here, and the shader later computes 1 - (1 - a)^2 at runtime, the square and the square root cancel and the final brightness factor works out to 1 - light (not accounting for any gamma correction).
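A quick numeric check of that cancellation (plain Python, with an arbitrary value for light):

import math

light = 0.3                     # accumulated hit total from the raycasts
a = 1.0 - math.sqrt(light)      # what getLight returns (the alpha channel)
final = 1.0 - (1.0 - a) ** 2    # what the shader computes at runtime
assert abs(final - (1.0 - light)) < 1e-9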

Shaders

The shaders assets/shaders/room.glsl and assets/shaders/roomlow.glsl are used to draw the room in high and low quality, respectively. The higher-quality shader is shown below.

uniform mat4 uMvpMatrix;
uniform sampler2D uTexture0;
uniform vec4 uColor;
uniform vec4 uLowerFog;
uniform vec4 uUpperFog;

varying vec4 vColor;
varying vec2 vTexCoord;
varying float vShadow;
varying vec4 vFog;

#ifdef VERTEX
attribute vec3 aPosition;
attribute vec2 aTexCoord;
attribute vec4 aColor;

void main(void)
{
	gl_Position = uMvpMatrix * vec4(aPosition, 1.0);

	float nearPlane = 0.4;
	vec4 upperFog = uUpperFog;
	vec4 lowerFog = uLowerFog;
	float t = gl_Position.y / (gl_Position.z+nearPlane) * 0.5 + 0.5;
	vec4 fogColor = mix(lowerFog, upperFog, t);
	float fog = clamp(0.05 * (-5.0 + gl_Position.z), 0.0, 1.0);
	vColor =  vec4(aColor.rgb, 0.5) * (2.0 * (1.0-fog));
	vFog = fogColor * fog;

	vShadow = 1.0-aColor.a;
	vTexCoord = aTexCoord;
}
#endif

#ifdef FRAGMENT
void main(void) 
{
	float light = (1.0-vShadow*vShadow);
	gl_FragColor = texture2D(uTexture0, vTexCoord) * vColor * light + vFog;
}
#endif

Further reading

  1. Polygon mesh (Wikipedia)
  2. Texture mapping (Wikipedia)
  3. Chapter from The Cg Tutorial on Lighting
  4. Introduction to Shading (Normals, Vertex Normals and Facing Ratio) from Scratchapixel

Reference material

  1. bake_mesh.py source code (from the Blender tools)
  2. Source code for distributePointsOnUnitSphere (reupload): https://gist.github.com/knot126/73d7b48e7617368e3f5c7bf9852e967f