NVlabs / nvdiffrec

Official code for the CVPR 2022 (oral) paper "Extracting Triangular 3D Models, Materials, and Lighting From Images".

Appearance-Aware NeRF Extractor #136

Closed marcostrinca closed 1 year ago

marcostrinca commented 1 year ago

Hi. Thanks very much for the amazing work. I'm trying to reproduce the results of the supplemental material, especially the NeRF 3D extraction. I've prepared the dataset, successfully trained in instant-ngp, and exported the mesh. I double-checked the mesh integrity.

I'm not sure I'm following the right steps to fine-tune the NeRF-extracted mesh in nvdiffrec. I'm using the extracted mesh as base_mesh and the folder with the original images and masks as ref_mesh, but I get an error in mesh.py, probably related to textures. I've tried creating an .mtl material and setting mtl_override to it, but the error persists.

Below are my config file and the error log:

Can you give me more information about the process you used with the Damicornis dataset, and also comment on the error above? I'm wondering if the error is caused by something I'm doing wrong in this process.

jmunkberg commented 1 year ago

Thanks @marcostrinca ,

From the log, I suspect that the mesh exported from iNGP does not have UVs / texture coordinates. Open the .obj in Blender, add some automatic UVs with UV -> Unwrap or UV -> Smart UV Project, and save the resulting model as .obj. Our method works best with large atlases and all UVs inside the unit box.

If you don't want to use Blender, you can (with some hacking in the code) use xatlas to add UVs to your .obj model. In our code, we run xatlas after the first optimization pass (during which the topology changes drastically), as illustrated here: https://github.com/NVlabs/nvdiffrec/blob/main/train.py#L601

marcostrinca commented 1 year ago

Hi @jmunkberg Thank you for answering my question.

I've made it work after adding the UV maps in Blender, but the model is being rendered with a different rotation. I've tried changing the coordinate system when exporting from Blender but couldn't make it work.

[image: img_mesh_pass_000000]

My next attempt will be to programmatically add the UV maps using xatlas. If you have any info about the coordinate system, I would love to know.

Best!

jmunkberg commented 1 year ago

Coordinate system changes are always a pain, but I wouldn't expect Blender to change it if you only opened the .obj model from the first pass, added UVs, and re-exported it. Make sure not to rotate or adjust the object manually before re-exporting.

We had to add a 90 degree rotation around the x-axis for the NeRF datasets, as illustrated here: https://github.com/NVlabs/nvdiffrec/blob/main/dataset/dataset_nerf.py#L67, but your image above suggests a 45 degree rotation, which I haven't seen in my experiments.
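Conceptually, that fix amounts to composing each camera-to-world pose with a fixed rotation matrix. A minimal numpy sketch (the pose here is a hypothetical placeholder, and whether the rotation is applied on the left or right depends on the convention in your loader):

```python
import numpy as np

def rotate_x(angle_rad):
    """4x4 homogeneous rotation around the x-axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([
        [1, 0,  0, 0],
        [0, c, -s, 0],
        [0, s,  c, 0],
        [0, 0,  0, 1],
    ], dtype=np.float32)

# Hypothetical NeRF-style camera-to-world pose; apply the 90 degree fix.
pose = np.eye(4, dtype=np.float32)
corrected = rotate_x(np.deg2rad(90)) @ pose
```

Testing +/-90 (or, given the image above, +/-45) degrees this way is much faster than round-tripping through Blender.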

jmunkberg commented 1 year ago

For an .obj exported from iNGP into nvdiffrec, there may be a change of coordinate frame. I haven't tested this in a while, but I would try a +/- 90 degree rotation around the x-axis and/or a flipped z-axis. A 45 degree difference looks suspicious, so perhaps verify whether iNGP recenters or applies any transform to the model before exporting (in which case the inverse transform has to be applied when loading the object into nvdiffrec for the poses to make sense).

marcostrinca commented 1 year ago

Hi @jmunkberg thank you so much for providing more information.

I've tried a couple of transformations in Blender, but I believe there is also a problem with the camera settings. The attached image shows the closest alignment I got by manually rotating the model in Blender, in this case -45 degrees around the x-axis. But the camera still seems to sit a bit too far from the model along the z-axis, and there are some small remaining rotation errors.

[image: _-45x0y0z]

I was checking the file you mentioned (for the NeRF datasets) and saw that you compute the projection and modelview matrices there. I was wondering whether those calculations are necessary to fine-tune the models extracted from NeRF, as in Section 7.2 of the paper. Could you please comment on that? (Worth noting: I'm using exactly the same dataset (images, masks and camera poses) for instant-ngp and nvdiffrec.)
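For reference, the projection side of that setup boils down to building an OpenGL-style perspective matrix from the `camera_angle_x` stored in a NeRF-style transforms.json. This is only an illustrative sketch (nvdiffrec has its own `util.perspective`, and the image size and FOV value below are hypothetical):

```python
import numpy as np

def perspective(fovy_rad, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix (illustrative;
    nvdiffrec builds its own in render/util.py)."""
    t = np.tan(fovy_rad / 2)
    return np.array([
        [1.0 / (t * aspect), 0, 0, 0],
        [0, 1.0 / t, 0, 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ], dtype=np.float32)

# NeRF-style transforms.json stores camera_angle_x (horizontal FOV);
# convert it to a vertical FOV for a W x H image.
W, H = 800, 800                 # hypothetical image size
camera_angle_x = 0.6911         # hypothetical value from transforms.json
fovy = 2 * np.arctan(np.tan(camera_angle_x / 2) * H / W)
proj = perspective(fovy, W / H, 0.1, 1000.0)
```

If the two renderers build this matrix from different FOV conventions (horizontal vs. vertical), the object will appear at the wrong apparent distance, which could explain the "camera too far" symptom.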

And a last question: after exporting the marching cubes mesh from NeRF, did you post-process it to clean up the geometry in some way before feeding it to nvdiffrec?

Many thanks!

jmunkberg commented 1 year ago

As mentioned earlier, I suspect that iNGP applies some transform before exporting the mesh (recentering based on some bounding box, etc.), so that is the first thing I would check in the iNGP code base. If a transform is missing, our fine-tuning will never succeed. Similarly, it is important that the projection matrix setup matches between the two renderers.

You may also want to disable auto-centering in our code. From the log above, it seems to be enabled: `DatasetLLFF: auto-centering at [-0.17565379 2.3209524 0.81833005]`

For the tests in Section 7.2 of the paper, we used the mesh exported from marching cubes unmodified, from an early version of iNGP, but that code base has changed a lot since then.

Also, as a sanity check: if you train from scratch (including the geometry optimization) only in nvdiffrec with your dataset (images, masks and camera poses), do you get reasonable results?

marcostrinca commented 1 year ago

Hi @jmunkberg. Thanks for providing more info and for clarifying the tests in Section 7.2.

I actually got some reasonable results training only in nvdiffrec/nvdiffrecmc, but the mesh is usually kind of noisy, i.e., it has irregular tessellation (after 20k to 30k iterations). I've tried playing with laplace_scale and sdf_regularizer but was unable to extract a really smooth shape, even though my masks are apparently good.

Right now I'm rewriting the code so I can use the same dataset in both nerfstudio and nvdiffrec, to try to reproduce the results from Section 7.2.

By the way, I saw a comment in #7 about using depth maps with nvdiffrec. Do you believe using omnidata to extract depth and normals would significantly improve mesh quality?

marcostrinca commented 1 year ago

Hi @jmunkberg. I succeeded in creating the new dataset class, which enables nerfstudio and sdfstudio data to be loaded into nvdiffrec/nvdiffrecmc. I'm closing this issue. Thank you so much!