NVlabs / nvdiffrec

Official code for the CVPR 2022 (oral) paper "Extracting Triangular 3D Models, Materials, and Lighting From Images".

Can't understand "mesh_scale" #97

Open cjlunmh opened 1 year ago

cjlunmh commented 1 year ago

Hello, thank you for sharing your great work! In a previous issue, you said that the "mesh_scale" parameter represents the size of the tetrahedral grid. When I test on another dataset (a public human dataset), I run into trouble. When I set "mesh_scale" to 3.0, I get the first result below (image); when I set it to 10, I get the second (image). Both results are from pass 1. I don't understand the influence of this parameter on the reconstruction. Why is the result completely white when the parameter is small? And in the second image, do you have any suggestions for preventing some areas from being cut off? I want to understand the influence of the hyperparameters (in the .json config file) on training. Looking forward to your reply!

jmunkberg commented 1 year ago

Hello,

Mesh scale scales the tetrahedral grid. For best results, adjust it so that it fairly tightly covers the model. If you look at the output of early training iterations, you can see the scale of the grid. We only optimize geometry inside the grid, so if mesh_scale is too small, you may cut the model. If it is too large, you will have lower geometric detail (the triangles are larger). Also, if you are working on your own data, please ensure that your model is properly centered.
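The coverage-vs-detail trade-off above can be sketched with a back-of-envelope calculation: for a fixed tetrahedral grid resolution, the grid spans `mesh_scale` world units, so the cell size (and hence the smallest representable geometric feature) grows linearly with `mesh_scale`. The resolution value of 64 below is an illustrative assumption; the actual resolution comes from the grid settings in your config.

```python
def tet_cell_size(mesh_scale: float, grid_res: int = 64) -> float:
    """Approximate edge length of one grid cell: the grid spans
    mesh_scale world units divided into grid_res cells per axis.
    grid_res=64 is an illustrative default, not the repo's value."""
    return mesh_scale / grid_res

# A grid covering 3.0 units resolves features roughly 3x finer
# than one stretched over 10.0 units at the same resolution.
small = tet_cell_size(3.0)    # 0.046875
large = tet_cell_size(10.0)   # 0.15625
```

So a grid that barely covers the model gives the finest triangles, while an oversized grid wastes resolution on empty space.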

For the LLFF dataset reader, we added auto-centering, so you can add something similar to your reader in case your data needs it: https://github.com/NVlabs/nvdiffrec/blob/main/dataset/dataset_llff.py#L62
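A minimal sketch of what such auto-centering could look like, assuming camera-to-world pose matrices of shape (N, 4, 4); the function name and shape convention here are illustrative, not the repo's exact code:

```python
import numpy as np

def center_poses(poses):
    """Translate camera-to-world matrices so the mean camera
    position sits at the origin, keeping the model near the
    center of the tetrahedral grid.

    poses: (N, 4, 4) array of camera-to-world transforms.
    Returns the centered poses and the offset that was subtracted.
    """
    poses = np.asarray(poses, dtype=np.float64).copy()
    center = poses[:, :3, 3].mean(axis=0)  # mean camera origin
    poses[:, :3, 3] -= center              # shift all origins
    return poses, center

# Example: two cameras offset from the origin
p = np.tile(np.eye(4), (2, 1, 1))
p[0, :3, 3] = [1.0, 2.0, 3.0]
p[1, :3, 3] = [3.0, 2.0, 1.0]
centered, offset = center_poses(p)
# offset is [2, 2, 2]; the centered origins now average to zero
```

Centering by the mean camera origin is a heuristic; if your cameras are not distributed evenly around the subject, centering on the object's bounding box instead may work better.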

There is some more discussion about mesh_scale in https://github.com/NVlabs/nvdiffrec/issues/90