jpaver / opengametools

A set of open c++ game development tools that are lightweight, easy-to-integrate and free to use. Currently hosting a magicavoxel .vox full scene loader.
MIT License
373 stars · 35 forks

Feature request: Merging models #16

Closed cshlin closed 3 years ago

cshlin commented 3 years ago

I'm hoping it would be an easy task to merge multiple models into a single model. I'm looking for functionality similar to MagicaVoxel's 'union' (U) function when selecting a root node.

jpaver commented 3 years ago

Hi @cshlin, that would be pretty straightforward to implement. What do you see the calling interface for that looking like?

Off the top of my head...

    // Merges all specified input models as if they were located/oriented with the
    // specified transforms, and returns the output transform of the merged model.
    // If the union of all models is too large for a single model, this function
    // fails and returns NULL. If invalid (non-cardinally aligned) transforms are
    // provided, this function fails and returns NULL.
    ogt_vox_model* ogt_vox_merge_models( ogt_vox_transform* outTransform, const ogt_vox_model** inputModels, const ogt_vox_transform* inputTransforms, uint32_t numInputModels )
    {
       ...
    }
cshlin commented 3 years ago

That looks good.. I'd love to use that interface instead of what I tried to hack together :)

I took a stab at it, and I think I'm close on the algorithmic side, but I probably messed up a matrix multiplication or flipped an axis somewhere, because the parts don't end up exactly where I'd like them. Also, I disabled the assertion for max model size, and it still seems to work; maybe the new version of MagicaVoxel can support a bigger model. Here's what I did:

  1. Load the scene using the library
  2. For each model, traverse the scene graph and apply the transforms at each node up to the root:

         while (parent_group_index != k_invalid_group_index) {
             const ogt_vox_group* group = &scene->groups[parent_group_index];
             instance_transform = _vox_transform_multiply(instance_transform, group->transform);
             parent_group_index = group->parent_group_index;
         }
  3. Iterate through each instance and apply the transform to the model (I think I messed this up?):

         // b is the final instance_transform from above
         for (uint32_t z = 0; z < model->size_z; z++) {
             for (uint32_t y = 0; y < model->size_y; y++) {
                 for (uint32_t x = 0; x < model->size_x; x++, voxel_index++) {
                     float mat_x = (x * b.m00) + (y * b.m10) + (z * b.m20) + (1 * b.m30);
                     float mat_y = (x * b.m01) + (y * b.m11) + (z * b.m21) + (1 * b.m31);
                     float mat_z = (x * b.m02) + (y * b.m12) + (z * b.m22) + (1 * b.m32);
                     float new_w = (x * b.m03) + (y * b.m13) + (z * b.m23) + (1 * b.m33);
                     float new_x = mat_x / new_w;
                     float new_y = mat_y / new_w;
                     float new_z = mat_z / new_w;
  4. Determine the min/max x,y,z values from the set of all the models to create the new merged model bounding box
  5. Determine the translation needed to move the merged model to a 0,0,0 origin and apply it to the final transform from above
  6. Allocate space for the merged model
  7. Populate the merged model with the transformed voxels, mapping from the old values:

         for (uint32_t z = 0; z < model->size_z; z++) {
             for (uint32_t y = 0; y < model->size_y; y++) {
                 for (uint32_t x = 0; x < model->size_x; x++, voxel_index++) {
                     uint8_t color_index = model->voxel_data[voxel_index];
                     new_voxel_data[(new_x * k_stride_x) + (new_y * k_stride_y) + (new_z * k_stride_z)] = color_index;
  8. Replace the scene elements with a single group, instance, and the merged model
  9. Save to file using the library

Not sure where I messed up though :(

cshlin commented 3 years ago

Or come to think of it, the ability to merge everything under a specified parent group in an existing scene would be pretty sweet, instead of specifying models..

cshlin commented 3 years ago

Yeah, I'm definitely struggling with converting models to world space based on the transform graph. I'm not sure if I have to translate the model to a new origin first before applying the transforms, or if I'm just doing the transformation wrong (multiplying each voxel by the full transformation matrix).

cshlin commented 3 years ago

I figured out my issue! I had to recenter the origin of the model before applying the transformations:

        for (uint32_t z = 0; z < model->size_z; z++) {
            for (uint32_t y = 0; y < model->size_y; y++) {
                for (uint32_t x = 0; x < model->size_x; x++, voxel_index++) {
                    int32_t mid_x = x-model->size_x/2;
                    int32_t mid_y = y-model->size_y/2;
                    int32_t mid_z = z-model->size_z/2;
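The recenter-then-rotate idea in the snippet above can be sketched as one self-contained helper. This is a hedged illustration, not the library's API: the function name and the bare 3x3 rotation array are assumptions for the example; the pivot at `size/2` matches how MagicaVoxel positions instances by their model center:

```cpp
#include <cstdint>

struct voxel_coord { int32_t x, y, z; };

// Illustrative helper: rotate a voxel coordinate about the model's center,
// the way MagicaVoxel pivots instances at (size/2) rather than at the
// model's corner. rot is a row-major 3x3 cardinal rotation, row-vector
// convention (result = mid * rot).
voxel_coord rotate_about_center(const int32_t rot[9],
                                int32_t x, int32_t y, int32_t z,
                                uint32_t size_x, uint32_t size_y, uint32_t size_z) {
    // Recenter: shift so the pivot at size/2 becomes the origin.
    int32_t mid_x = x - (int32_t)size_x / 2;
    int32_t mid_y = y - (int32_t)size_y / 2;
    int32_t mid_z = z - (int32_t)size_z / 2;
    // Apply the rotation to the recentered coordinate.
    return {
        mid_x * rot[0] + mid_y * rot[3] + mid_z * rot[6],
        mid_x * rot[1] + mid_y * rot[4] + mid_z * rot[7],
        mid_x * rot[2] + mid_y * rot[5] + mid_z * rot[8],
    };
}
```

The caller would still need to add back the instance translation and re-offset into the merged model's grid afterwards.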
cshlin commented 3 years ago

Actually, I didn't quite get it, and I haven't been able to figure it out in the last week.. Would you happen to have some time to assist?

jpaver commented 3 years ago

Sure. Can you post what you have already?

cshlin commented 3 years ago

Yes, thank you! I've taken out a few pieces of redundant code, but here's what I have... sorry for my inexperienced coding. I've left some comments in the file tagged with your username. In the end I managed to get the result that I wanted with some ugh.. hardcoded adjustments, so it's no longer a critical issue, but I am curious as to why I couldn't get the right results algorithmically.

example.cpp.txt

jpaver commented 3 years ago

I took a very shallow look; it seems like a good approach, and I can't see anything immediately wrong. I'll have time to take a deeper look with some debugging in the next day or two. If you're ok with sharing a problematic .vox scene but don't want to include it here, feel free to drop a link to me via DM @JustinPaver on twitter (I will only use it for debugging this issue); otherwise it may be helpful to get an idea of how your scene is set up so I can repro.

Just one thought in the meanwhile...

The transforms you get out of MagicaVoxel will always be unit scale and always along a cardinal axis (X, Y or Z), so in effect (m03, m13, m23, m33) will always be (0.0, 0.0, 0.0, 1.0), and new_w in logic like this will always be 1, so these divides can be removed:

                int32_t mat_x = (mid_x * b.m00) + (mid_y * b.m10) + (mid_z * b.m20) + (1 * b.m30);
                int32_t mat_y = (mid_x * b.m01) + (mid_y * b.m11) + (mid_z * b.m21) + (1 * b.m31);
                int32_t mat_z = (mid_x * b.m02) + (mid_y * b.m12) + (mid_z * b.m22) + (1 * b.m32);
                int32_t new_w = (mid_x * b.m03) + (mid_y * b.m13) + (mid_z * b.m23) + (1 * b.m33);

                int32_t new_x = mat_x / new_w;
                int32_t new_y = mat_y / new_w;
                int32_t new_z = mat_z / new_w;

... and the transform logic can become:

                int32_t new_x = (mid_x * b.m00) + (mid_y * b.m10) + (mid_z * b.m20) + (1 * b.m30);
                int32_t new_y = (mid_x * b.m01) + (mid_y * b.m11) + (mid_z * b.m21) + (1 * b.m31);
                int32_t new_z = (mid_x * b.m02) + (mid_y * b.m12) + (mid_z * b.m22) + (1 * b.m32);

Also, as a consequence, calculate_new_bounding_box can become much simpler: you won't need to iterate through every cell within a model; you can compute the bounding box from just the extreme model cell coordinates.
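Under that assumption (unit-scale, axis-aligned transforms), a sketch of the corner-only bounds computation might look like the following. The `transformed_bounds` helper and `box3i` type are illustrative names, not the library's API; the matrix is passed as a flat row-major 4x4 matching the m00..m33 layout used in the snippets above:

```cpp
#include <cstdint>
#include <algorithm>

struct box3i { int32_t min_x, min_y, min_z, max_x, max_y, max_z; };

// Compute a model's post-transform bounding box from its 8 box corners only.
// This is valid because the transform is a unit-scale cardinal rotation plus
// translation, so extremes can only occur at the corners of the box.
// m is row-major 4x4 (row-vector convention): m[0..3] is row 0 (m00..m03).
box3i transformed_bounds(const float m[16],
                         uint32_t size_x, uint32_t size_y, uint32_t size_z) {
    box3i b = { INT32_MAX, INT32_MAX, INT32_MAX, INT32_MIN, INT32_MIN, INT32_MIN };
    for (int corner = 0; corner < 8; corner++) {
        // Pick min or max cell coordinate on each axis from the corner bits.
        int32_t x = (corner & 1) ? (int32_t)size_x - 1 : 0;
        int32_t y = (corner & 2) ? (int32_t)size_y - 1 : 0;
        int32_t z = (corner & 4) ? (int32_t)size_z - 1 : 0;
        int32_t wx = (int32_t)(x * m[0] + y * m[4] + z * m[8]  + m[12]);
        int32_t wy = (int32_t)(x * m[1] + y * m[5] + z * m[9]  + m[13]);
        int32_t wz = (int32_t)(x * m[2] + y * m[6] + z * m[10] + m[14]);
        b.min_x = std::min(b.min_x, wx); b.max_x = std::max(b.max_x, wx);
        b.min_y = std::min(b.min_y, wy); b.max_y = std::max(b.max_y, wy);
        b.min_z = std::min(b.min_z, wz); b.max_z = std::max(b.max_z, wz);
    }
    return b;
}
```

Eight corner transforms per model instead of size_x * size_y * size_z, which is the simplification suggested above.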

cshlin commented 3 years ago

Thanks for the tips :) I tried to send you a DM on twitter, but I think you may have them turned off for people you aren't following. I'm @charleslin there.

jpaver commented 3 years ago

Ok, got the model, will dig into it soon.

cshlin commented 3 years ago

Hmm, that new bounding box logic definitely makes a difference, but not exactly for the better. Not sure why things are so inconsistent. I'm also finding some issues from mixing data types: integer voxel coordinates with the floating-point transforms.

cshlin commented 3 years ago

I figured it out and it wasn't sexy.. I'll post the solution here in just a bit once I get it pretty. But just wanted to let you know so as to not waste any extra time debugging my code for now :)

cshlin commented 3 years ago

So I thought that it was funny that the instance transforms were applied to the model centered at (size_x/2, size_y/2, size_z/2), but then once the models were merged I wasn't doing anything to account for that shift.

Turns out if the model at the root node had an original size of say [7, 7, 15] (from [-3, -3, -7] to [3, 3, 7]), and then the merged model was [9, 7, 15], I needed to apply a pre-transform of [-1, 0, 0] to center the model at [-4, -3, -7] to [4, 3, 7] first.

I ended up doing this (sorry for the verbose code):

    int32_t d_min_x = merged_min_v.x - root_min_v.x;
    int32_t d_min_y = merged_min_v.y - root_min_v.y;
    int32_t d_min_z = merged_min_v.z - root_min_v.z;
    int32_t d_max_x = merged_max_v.x - root_max_v.x;
    int32_t d_max_y = merged_max_v.y - root_max_v.y;
    int32_t d_max_z = merged_max_v.z - root_max_v.z;

    int32_t total_dx = d_min_x + d_max_x;
    int32_t total_dy = d_min_y + d_max_y;
    int32_t total_dz = d_min_z + d_max_z;

    int32_t sign_dx = (total_dx == 0) ? 0 : total_dx/abs(total_dx); // guard: axis already centered
    int32_t sign_dy = (total_dy == 0) ? 0 : total_dy/abs(total_dy);
    int32_t sign_dz = (total_dz == 0) ? 0 : total_dz/abs(total_dz);

    int32_t ret_x = sign_dx * ((abs(total_dx)+1)/2); // integer division truncates, so add 1 to round up
    int32_t ret_y = sign_dy * ((abs(total_dy)+1)/2);
    int32_t ret_z = sign_dz * ((abs(total_dz)+1)/2);

    // merged_instance_transform = identity * [ret_x, ret_y, ret_z, 1];
    // then walk the node graph to root
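The per-axis computation above can be factored into one small helper; this is a sketch with an illustrative name, not library code. Note the `total == 0` guard for the already-centered case, where a plain `total / abs(total)` would divide by zero:

```cpp
#include <cstdint>
#include <cstdlib>

// Centering offset for one axis: given how far the merged box's min and max
// have drifted from the root model's box (d_min = merged_min - root_min,
// d_max = merged_max - root_max), return the translation that recenters it.
int32_t centering_offset(int32_t d_min, int32_t d_max) {
    int32_t total = d_min + d_max;
    if (total == 0)
        return 0; // axis is already centered
    int32_t sign = total / std::abs(total);
    // Integer division truncates, so add 1 before halving to round the
    // magnitude up (half-voxel shifts round away from zero).
    return sign * ((std::abs(total) + 1) / 2);
}
```

For the [7, 7, 15] vs [9, 7, 15] example above, a merged box whose min drifted by -2 on x while the max stayed put yields an offset of -1, matching the [-1, 0, 0] pre-transform described there.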
jpaver commented 3 years ago

@cshlin I was hoping to get more time to dig in, but I regrettably did not. Glad you found the solution. Please feel free to submit a PR for a demo that does the merge if you'd like to contribute it ;)

cshlin commented 3 years ago

Ooo exciting, I've never done one before, I'll clean it up and give it a shot :)