VAST-AI-Research / TripoSR

MIT License
4.6k stars · 535 forks

Marching cubes meshing is done incorrectly(?) #12

Open AdamFrisby opened 8 months ago

AdamFrisby commented 8 months ago

First up - big kudos. Really great work, by far the best I've seen.

I just wanted to point out that the generation of the mesh from the field is being done incorrectly; there's probably an 'off-by-0.5' error somewhere in the generation of the mesh via marching cubes (I haven't had a chance to look myself yet - but will when I have some free time). I suspect the underlying model is just fine.

You can see the result of the sample generation here: image

If you look at the cushion, you will see a terracing effect - this is a sign that the sampling method used by the marching cubes is duplicating reads of the edges of a 'cell' instead of reading into the neighbourhood correctly (often caused by chunking); it can also be caused by rounding of coordinates.

This same issue also seems to be affecting the generation of the normals.
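The suspected 'off-by-0.5' can be sketched in isolation: mapping grid indices back to world coordinates at cell corners instead of cell centers shifts every extracted vertex by half a voxel. The mapping below is a hypothetical illustration, not TripoSR's actual code:

```python
import numpy as np

# Hypothetical illustration of the suspected 'off-by-0.5': mapping a grid
# index i in [0, n-1] back to world space. Sampling at cell corners
# (offset 0) instead of cell centers (offset 0.5) shifts every vertex by
# half a voxel, which reads as terracing at low resolutions.
def grid_to_world(i, n, lo=-1.0, hi=1.0, center=True):
    cell = (hi - lo) / n
    return lo + (i + (0.5 if center else 0.0)) * cell

n = 4
corners = np.array([grid_to_world(i, n, center=False) for i in range(n)])
centers = np.array([grid_to_world(i, n, center=True) for i in range(n)])
# centers are symmetric about 0; corners are biased half a cell low
```

With n = 4 over [-1, 1], centers land at ±0.25 and ±0.75 while corners land at -1.0 … 0.5, i.e. the whole surface drifts by half a cell in one direction.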

mr-lab commented 8 months ago

You may be on to something. image Reading https://github.com/VAST-AI-Research/TripoSR/blob/main/tsr/system.py from line 163 onward; will see whether the marching cubes isn't working correctly or whether it's simply an issue with the resolution of 256. It could be the threshold as well. Will post any updates; I just need to install it.

bennyguo commented 8 months ago

I think it could be due to marching cubes with resolution 256. Could you please try with higher resolutions and see if things get better? To use different resolutions, simply change model.extract_mesh(scene_codes) to model.extract_mesh(scene_codes, resolution=some_integer).

AdamFrisby commented 8 months ago

I've taken a brief look last night and think it might be an issue upstream in the torchmcubes library.

I'm planning on taking a look this weekend and will see if I can resolve. One thing that could be helpful is exporting a 3D array to disk -- that way I can test the output in another mesher.
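The export idea can be sketched with plain NumPy: dump the sampled density volume to disk so an external mesher can consume the identical field. The toy sphere field and the `density` name here are stand-ins for illustration, not TripoSR internals:

```python
import os
import tempfile

import numpy as np

# Hedged sketch of the export idea: save the sampled density volume so
# the exact same field can be meshed by an external tool (Blender,
# scikit-image's marching_cubes, ...). The toy sphere field below is a
# stand-in for whatever tensor isosurface_helper actually consumes.
res = 32
x, y, z = np.mgrid[-1:1:res * 1j, -1:1:res * 1j, -1:1:res * 1j]
density = 1.0 - np.sqrt(x**2 + y**2 + z**2)  # toy sphere field

path = os.path.join(tempfile.mkdtemp(), "density.npy")
np.save(path, density.astype(np.float32))
loaded = np.load(path)  # what another mesher would read back
```

A .npy round-trip like this keeps dtype and shape intact, so any disagreement between meshers would then be attributable to the mesher rather than the field.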

mr-lab commented 8 months ago

Kind of better results: image

In system.py line 171, change the threshold from 25 to 5; I think this works for solid objects like an apple. Resolutions won't fix it: 512 takes about 21 GB VRAM and 1024 about 44 GB. When generating a complex object like a tree, best not to go lower than threshold 10 or it will be a big blob. Either torchmcubes or isosurface_helper has something wrong, or it could be that's just how it works. Still can't get my head around a bunch of things: what type is scene_codes, what happens in isosurface_helper... Will see what I can do to replace it with a better reconstruction algorithm, like directly in trimesh or a point-cloud-to-mesh method; I just need to find the 3D points. Cheers
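The VRAM figures quoted above at least track the cubic growth of the query grid. A rough back-of-envelope, counting just one float32 density value per voxel (the real pipeline stores far more per sample, hence the much larger totals):

```python
# Why raising the marching cubes resolution explodes memory: the query
# grid grows as resolution**3. One float32 channel per voxel only.
for res in (256, 512, 1024):
    gib = res ** 3 * 4 / 2 ** 30
    print(f"{res}^3 grid: {gib:.4g} GiB for one float32 channel")
```

Going from 256 to 1024 multiplies the voxel count by 64, so even a single scalar channel jumps from 64 MiB to 4 GiB before any features or activations are counted.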

andybak commented 8 months ago

> simply change model.extract_mesh(scene_codes) to model.extract_mesh(scene_codes, resolution=some_integer).

Any chance this could be exposed in the Gradio UI?

andybak commented 8 months ago

> I've taken a brief look last night and think it might be an issue upstream in the torchmcubes library.
>
> I'm planning on taking a look this weekend and will see if I can resolve. One thing that could be helpful is exporting a 3D array to disk -- that way I can test the output in another mesher.

Any options to export an internal representation would be useful. Meshes are the lowest common denominator, and it's interesting to start thinking of ways to pass output between different apps and platforms without baking out a mesh.

mr-lab commented 8 months ago

> simply change model.extract_mesh(scene_codes) to model.extract_mesh(scene_codes, resolution=some_integer).
>
> Any chance this could be exposed in the Gradio UI?

Here is how to fully expose it in the UI. Add these lines after line 107 in gradio_app.py:

                resolution = gr.Slider(
                    label="mesh resolution",
                    minimum=128,
                    maximum=512,
                    value=256,
                    step=64,
                )
                threshold = gr.Slider(
                    label="marching cubes extraction threshold",
                    minimum=1,
                    maximum=100,
                    value=25,
                    step=1,
                )

Then in line 160, change inputs=[processed_image], to inputs=[processed_image, resolution, threshold],

In line 58, change generate(image): to generate(image, resolution, threshold):

Then in line 60, change mesh = model.extract_mesh(scene_codes)[0] to mesh = model.extract_mesh(scene_codes, resolution, threshold)[0]

This still won't change the quality of the mesh, just how dense it is. Will give it a try again; perhaps the threshold needs to be 25, or resolution/2.

Will test that when I get back to the desk.

mr-lab commented 8 months ago

> I think it could be due to marching cubes with resolution 256. Could you please try with higher resolutions and see if things get better? To use different resolutions, simply change model.extract_mesh(scene_codes) to model.extract_mesh(scene_codes, resolution=some_integer).

image

Resolutions have no effect. It's the cell merging, as AdamFrisby said.

AdamFrisby commented 8 months ago

Threshold should actually be more like 0. It's probably intended as a basic de-noising mechanism, but it's going to cause weird cutoffs.

It might be worth testing plugging in an alternative de-noiser, like this one: https://github.com/hkuadithya/CUDA-NLML-MRI-Denoising
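One way to test that hypothesis without a full NLML denoiser: smooth the density field first (a Gaussian filter is a minimal stand-in denoiser) and mesh at the true iso-level, instead of using a high threshold as a crude noise cutoff. Everything below (the field, the noise model, the sigma) is a synthetic assumption for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch: compare classifying voxels as inside/outside on a noisy
# density field directly vs. after smoothing. A high marching-cubes
# threshold hides noise but cuts thin geometry; smoothing + a threshold
# near the true iso-level keeps it.
rng = np.random.default_rng(0)
res = 48
x, y, z = np.mgrid[-1:1:res * 1j, -1:1:res * 1j, -1:1:res * 1j]
field = 0.3 - np.sqrt(x**2 + y**2 + z**2)            # clean sphere, radius 0.3
noisy = field + rng.normal(0.0, 0.05, field.shape)   # synthetic noise

smoothed = gaussian_filter(noisy, sigma=1.5)         # stand-in denoiser

# Fraction of voxels misclassified relative to the clean iso-surface:
err_raw = np.mean((noisy > 0) != (field > 0))
err_smooth = np.mean((smoothed > 0) != (field > 0))
```

On this toy field the smoothed classification makes fewer mistakes at iso-level 0 than the raw noisy field does, which is the behaviour a proper denoiser would be expected to improve further.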

mr-lab commented 8 months ago

> Threshold should actually be more like 0. It's probably intended as a basic de-noising mechanism; but it's going to cause weird cutoffs.
>
> It might be worth testing plugging in an alternative de-noiser, like this one: https://github.com/hkuadithya/CUDA-NLML-MRI-Denoising

At 0.1 it's just a big blob: image It messes up the vertex colors as well, and 0 is just an error ("cannot reshape tensor"). The threshold simply takes a set of points and decides how far/bad those points can be before we completely disregard them from the final mesh.

mrbid commented 8 months ago

> Kind of better results: image In system.py line 171, change the threshold from 25 to 5; I think this works for solid objects like an apple. Resolutions won't fix it: 512 takes about 21 GB VRAM and 1024 about 44 GB. When generating a complex object like a tree, best not to go lower than threshold 10 or it will be a big blob. Either torchmcubes or isosurface_helper has something wrong, or it could be that's just how it works. Still can't get my head around a bunch of things: what type is scene_codes, what happens in isosurface_helper... Will see what I can do to replace it with a better reconstruction algorithm, like directly in trimesh or a point-cloud-to-mesh method; I just need to find the 3D points. Cheers

1024 at 44 GB?

https://github.com/VAST-AI-Research/TripoSR/assets/78346668/ff600209-f4f8-406b-8d19-3a83481dea66

Almost :wink:

Morbid Interest: image

I don't think the front is as noticeably different, other than around the eyes and mouth: image

The model files for reference: 256+1024.zip

The source input image as reference: cutecat. Reference: https://civitai.com/images/6932312

Basic threshold comparison: image

mr-lab commented 8 months ago

> Kind of better results: image In system.py line 171, change the threshold from 25 to 5; I think this works for solid objects like an apple. Resolutions won't fix it: 512 takes about 21 GB VRAM and 1024 about 44 GB. When generating a complex object like a tree, best not to go lower than threshold 10 or it will be a big blob. Either torchmcubes or isosurface_helper has something wrong, or it could be that's just how it works. Still can't get my head around a bunch of things: what type is scene_codes, what happens in isosurface_helper... Will see what I can do to replace it with a better reconstruction algorithm, like directly in trimesh or a point-cloud-to-mesh method; I just need to find the 3D points. Cheers
>
> 1024 at 44GB ?
>
> Peek.2024-03-06.14-35.mp4 Almost 😉

Thank you for verifying the resolutions. As for the threshold: if the subject has multiple sub-mesh islands, like the hair on that cat or branches on a tree, it will be bad, plus no correct vertex colors anyway. 10 is the best value; use lower values for simple objects like a box, an apple, or a chair. It's a hit-or-miss thing.

tl;dr: the 3D points generated from this model are solid gold; the mesh generation has an issue. Will look into adding https://github.com/ranahanocka/point2mesh or something else ... I have tested a basic C# auto-repair ("geometry3Sharp"). Input: tmpg_czs2s7

https://github.com/VAST-AI-Research/TripoSR/assets/16530175/3ec767c1-ef08-47cf-969f-ae8efe67d574
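The "find the 3D points" step can be sketched with NumPy alone: treat voxel centers whose density clears the iso-level as a point cloud, which a point2mesh / trimesh / Poisson-style reconstructor could then consume. The toy field and names are stand-ins, not TripoSR internals:

```python
import numpy as np

# Hedged sketch: extract occupied-voxel centers from a density grid as an
# (M, 3) point cloud, the input a point-cloud-to-mesh method would need.
res = 32
idx = np.arange(res)
centers = -1.0 + (idx + 0.5) * (2.0 / res)   # voxel-center coordinates
x, y, z = np.meshgrid(centers, centers, centers, indexing="ij")
density = 0.5 - np.sqrt(x**2 + y**2 + z**2)  # toy sphere, radius 0.5

mask = density > 0.0                          # iso-level threshold
points = np.stack([x[mask], y[mask], z[mask]], axis=1)  # (M, 3) cloud
```

Everything selected here lies strictly inside the radius-0.5 sphere, so a surface reconstructor fed this cloud would be working from the same geometry the marching cubes should have produced.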