autonomousvision / sdfstudio

A Unified Framework for Surface Reconstruction

Mesh Deformation at margin #74

Open HungNgoCT opened 1 year ago

HungNgoCT commented 1 year ago

Hi @niujinshuchong and all,

Thank you for the great work. However, I have a problem and need help.

I trained neus-facto with the CLI: ns-train neus-facto --pipeline.model.sdf-field.inside-outside False --pipeline.model.background-model mlp --trainer.max-num-iterations 60000 --pipeline.model.near-plane 1.0 --pipeline.model.far-plane 10.0 sdfstudio-data --data sdfstudio\data\chair --auto-orient True

Rendering was fine, as shown here: https://user-images.githubusercontent.com/110378166/229024016-2766f741-1254-4c45-ae9b-b4c32aa47b8a.mp4

Then, I extracted the mesh: ns-extract-mesh --load-config --resolution 1024 --bounding-box-min -2.0 -2.0 -2.0 --bounding-box-max 2.0 2.0 2.0 --output-path

I obtained a mesh like the one in the video and the image below: https://user-images.githubusercontent.com/110378166/229024388-0ab58c5d-14db-43b6-b7f9-80e05942ea64.mp4 [image]

It seems the chair was bent toward a sphere at the edge of its bounding box. My other results are bent in the same way.

@niujinshuchong, or anyone else, can you give me advice on this? I think some of the parameters are not right, and I do not yet fully understand which parameters are necessary here.

Some of the image data is shown here: [image]

I am sharing the data here, with the meta_data.json file, in case you need to check it: https://drive.google.com/file/d/18k7-k5XHeZoOzsq9Pbg8PcJdPuWDSNZm/view?usp=sharing

I would highly appreciate your help.

niujinshuchong commented 1 year ago

Hi, this seems to be related to the contraction function used during training. Could you try applying the inverse contraction to the extracted mesh's vertices to see whether that fixes it? Please check here: https://github.com/autonomousvision/sdfstudio/blob/master/scripts/extract_mesh.py#L77-L82 and here: https://github.com/autonomousvision/sdfstudio/blob/master/nerfstudio/utils/marching_cubes.py#L322-L325

Or you could run ns-extract-mesh with --create_visibility_mask True, which will apply inv_contraction after extracting the mesh.

Please let me know what you get and I will try to make an update later.
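For illustration, a minimal sketch of what applying the inverse contraction to the extracted vertices could look like. It assumes the standard MipNeRF-360 contraction with an L-infinity norm and uses trimesh for mesh I/O; both are assumptions here, and the exact variant lives in the marching_cubes.py linked above.

```python
import numpy as np
import trimesh  # illustrative choice for mesh I/O, not part of sdfstudio

def inv_contraction(y: np.ndarray) -> np.ndarray:
    """Map contracted points in (-2, 2)^3 back to world coordinates."""
    mag = np.linalg.norm(y, ord=np.inf, axis=-1, keepdims=True)
    x = y.copy()
    mask = (mag > 1.0).squeeze(-1)
    # forward contraction gives ||y|| = 2 - 1/||x||, hence ||x|| = 1/(2 - ||y||);
    # points with ||y|| -> 2 map toward infinity, so crop or clamp them first
    x[mask] = y[mask] / (mag[mask] * (2.0 - mag[mask]))
    return x

mesh = trimesh.load("mesh.ply")  # hypothetical path of the extracted mesh
mesh.vertices = inv_contraction(np.asarray(mesh.vertices))
mesh.export("mesh_uncontracted.ply")
```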

HungNgoCT commented 1 year ago

> Hi, this seems to be related to the contraction function used during training. Could you try applying the inverse contraction to the extracted mesh's vertices to see whether that fixes it? Please check here: https://github.com/autonomousvision/sdfstudio/blob/master/scripts/extract_mesh.py#L77-L82 and here: https://github.com/autonomousvision/sdfstudio/blob/master/nerfstudio/utils/marching_cubes.py#L322-L325
>
> Or you could run ns-extract-mesh with --create_visibility_mask True, which will apply inv_contraction after extracting the mesh.
>
> Please let me know what you get and I will try to make an update later.

Hi @niujinshuchong,

As you predicted, the deformation is corrected when using --create_visibility_mask True. However, the mesh in flat regions such as the floor and the object's surroundings becomes poor. Please check the new result in the image and video below.

![image](https://user-images.githubusercontent.com/110378166/229105127-c96988af-7e00-401a-a094-f9df9ada18ab.png)

https://user-images.githubusercontent.com/110378166/229105170-252c3e06-7f30-4c97-9d70-8f3fbb8a597a.mp4

Do you have any advice on this so that I can get a better mesh?

niujinshuchong commented 1 year ago

@HungNgoCT could you also try the other way I mentioned above to extract the mesh?

HungNgoCT commented 1 year ago

> @HungNgoCT could you also try the other way I mentioned above to extract the mesh?

Sure. Let me try it and I will let you know.

HungNgoCT commented 1 year ago

> @HungNgoCT could you also try the other way I mentioned above to extract the mesh?

Hi @niujinshuchong,

I tested, and all three functions get_surface_occupancy(), get_surface_sliding(), and of course get_surface_sliding_with_contraction() need the inv_contraction function to avoid distortion. The result is similar to the video above; the whole scene is shown on the left. However, the reconstructed mesh in the regions where mag >= 1 (inside the inv_contraction() function) is not good enough, as in the image I show here on the right.

Do you have any ideas to improve mesh quality in these regions? [image]

niujinshuchong commented 1 year ago

@HungNgoCT If part of the chair is bad just because it's outside [-1, 1], then you could simply scale the poses so that the ROI is inside [-1, 1]. I also found that the contracted parts of the meshes are not as smooth as the middle region when I was experimenting with BakedSDF, so I added a spatially varying eikonal loss that applies a larger eikonal weight to points far from the origin. Please check it here: https://github.com/autonomousvision/sdfstudio/blob/master/nerfstudio/models/bakedsdf.py#L254-L269 and it might be helpful in your case.
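As a rough sketch of that idea (the weight values and the linear ramp here are assumptions for illustration, not the exact bakedsdf.py code linked above):

```python
import torch

def spatially_varying_eikonal_loss(
    points: torch.Tensor,    # (N, 3) sample positions in world space
    sdf_grad: torch.Tensor,  # (N, 3) SDF gradients at those positions
    w_near: float = 0.01,    # assumed weight inside the unit sphere
    w_far: float = 0.1,      # assumed, larger weight in the contracted region
) -> torch.Tensor:
    """Penalize ||grad sdf|| deviating from 1, more strongly far from the origin."""
    grad_norm = sdf_grad.norm(dim=-1)
    dist = points.norm(dim=-1)
    # ramp the weight linearly from w_near to w_far as dist goes from 1 to 2
    w = w_near + (w_far - w_near) * (dist - 1.0).clamp(0.0, 1.0)
    return (w * (grad_norm - 1.0) ** 2).mean()
```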

HungNgoCT commented 1 year ago

> @HungNgoCT If part of the chair is bad just because it's outside [-1, 1], then you could simply scale the poses so that the ROI is inside [-1, 1]. I also found that the contracted parts of the meshes are not as smooth as the middle region when I was experimenting with BakedSDF, so I added a spatially varying eikonal loss that applies a larger eikonal weight to points far from the origin. Please check it here: https://github.com/autonomousvision/sdfstudio/blob/master/nerfstudio/models/bakedsdf.py#L254-L269 and it might be helpful in your case.

Thank you @niujinshuchong. In my case, I want to keep the object scale, so I changed the radius R in the contraction formula to a bigger value. The results then become better, as in the left image. By the way, I see that the reconstructed mesh is usually not very detailed. One result I got is shown in the image on the right: you can see the mesh around the chair's feet is fused with the floor. Could you advise which parameters I should adjust to improve this? I already tested with resolutions 512, 1024, and 2048, but that does not help.

[image]
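For context, one plausible way to write the contraction with a tunable radius R, along the lines described above (this generalization and the implementation are assumptions, not sdfstudio's exact code):

```python
import torch

def contract(x: torch.Tensor, R: float = 1.0) -> torch.Tensor:
    """MipNeRF-360-style contraction with radius R (R=1 is the standard form).
    Points with ||x|| <= R pass through unchanged; everything farther away is
    squashed into the shell R < ||y|| < 2R."""
    mag = (torch.linalg.norm(x, dim=-1, keepdim=True) / R).clamp_min(1e-9)
    return torch.where(mag <= 1.0, x, (2.0 - 1.0 / mag) * x / mag)

def inv_contract(y: torch.Tensor, R: float = 1.0) -> torch.Tensor:
    """Inverse of contract(); only valid for ||y|| < 2R."""
    mag = (torch.linalg.norm(y, dim=-1, keepdim=True) / R).clamp_min(1e-9)
    return torch.where(mag <= 1.0, y, y / (mag * (2.0 - mag)))
```

A larger R keeps more of the scene at metric scale before squashing, which matches the improvement reported above.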

niujinshuchong commented 1 year ago

@HungNgoCT that's a difficult problem. Maybe these regions are not observed well enough in your training images. Did you also try different scene representations, e.g. a pure MLP?