Yes, there is a known coordinate change between Blender and nvdiffrec.
You can either rotate the object around the x-axis when importing into Blender, or add
`mv = mv @ util.rotate_x(-np.pi / 2)`
after line 66 in https://github.com/NVlabs/nvdiffrec/blob/main/dataset/dataset_nerf.py#L66
I hope this helps!
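For reference, a minimal standalone sketch of what that one-liner does. Here `rotate_x` is a stand-in re-implementation, assuming the usual 4x4 homogeneous x-rotation that nvdiffrec's `util.rotate_x` provides:

```python
import numpy as np

def rotate_x(a):
    # 4x4 homogeneous rotation about the x-axis by angle a (radians).
    # Assumption: same convention as nvdiffrec's util.rotate_x.
    s, c = np.sin(a), np.cos(a)
    return np.array([[1, 0,  0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]], dtype=np.float32)

# Model-view matrix from the NeRF transforms .json
# (identity here as a stand-in for the loaded pose).
mv = np.eye(4, dtype=np.float32)

# Post-multiplying bakes a -90 degree x-rotation into the pose:
# Blender's z-up "up" direction (0, 0, 1) becomes y-up (0, 1, 0),
# matching nvdiffrec's OpenGL-style frame.
mv = mv @ rotate_x(-np.pi / 2)
```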
I added this rotation to the nerf_dataset reader in the latest commit 3faedd23813ff6a34fd69d4d5b466eb0485c70e1
With this, you should be able to run the Blender import script here: https://github.com/NVlabs/nvdiffrec/issues/21 and get correctly mapped materials and lighting in Blender.
Thank you for the fast response @jmunkberg! The change to the nerf_dataset reader does solve the problem with the coordinate frames. However, there is another issue with how environment light is handled between Blender and nvdiffrec. I have attached the environment map I used to generate the dataset in Blender and the probe.hdr that nvdiffrec estimates as the lighting. It seems like I am encountering a similar coordinate issue there as well. Note that I am selecting the Cubic and Equirectangular settings in the environment texture options before data generation:
Just to clarify: are you saying that the lighting is still off if you load the trained model using the Blender script provided here https://github.com/NVlabs/nvdiffrec/issues/21 and render it in Blender Cycles or just that the format of the env map textures differ?
I tested it quickly yesterday after the coordinate fix, and the lighting looked ok (probe orientation relative to the object) for me in Blender.
Also, do the images dumped by nvdiffrec during training look reasonable (rendered by nvdiffrec)?
The former: the lighting is off when I load the trained model into Blender with the learned light (probe.hdr), compared to loading the model with the original environment map the data was generated from. I am not using the script from #21; I am using the NeRF lego script.
The images dumped by nvdiffrec during training do look reasonable, and the final mesh quality is also acceptable.
Yes, as stated above, we are not using the same coordinate frame as Blender, so this is expected. The script in issue 21 only works with the exported mesh, materials and probe from nvdiffrec, as they are all learned in the same coordinate frame. The fix I added to the nerf dataset reader only rotates the model-view matrix extracted from the nerf dataset .json.
We leverage nvdiffrast, and similar to them, we follow OpenGL's coordinate systems and other conventions: https://nvlabs.github.io/nvdiffrast/#coordinate-systems
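To illustrate why an unrotated probe looks shifted between the two conventions, here is a hedged sketch (a hypothetical helper, not nvdiffrec's actual sampling code) of an equirectangular lookup in a y-up frame. The same "up" direction lands on a different row of the map depending on whether the world is y-up (OpenGL/nvdiffrec) or z-up (Blender):

```python
import numpy as np

def dir_to_equirect_uv(d):
    # Map a unit direction to equirectangular (u, v) in [0, 1]^2.
    # Assumption: y-up convention; v = 0 at the pole (0, 1, 0),
    # u wraps around the y-axis. Illustrative only.
    x, y, z = d
    u = 0.5 + np.arctan2(x, -z) / (2.0 * np.pi)
    v = np.arccos(np.clip(y, -1.0, 1.0)) / np.pi
    return u, v

# y-up "up" samples the top row of the map (v = 0) ...
print(dir_to_equirect_uv((0.0, 1.0, 0.0)))
# ... while Blender's z-up "up" samples the equator (v = 0.5),
# so without a compensating rotation the probes appear mismatched.
print(dir_to_equirect_uv((0.0, 0.0, 1.0)))
```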
Ok, I think you may have misunderstood me a little. Let me explain the steps I am taking: when generating the dataset, I select Cubic and Equirectangular in the environment texture options.

Hello @Selozhd,
Thanks for the clarifications and sorry for my blunt reply earlier! Busy times.
Could you try loading the nvdiffrec-extracted mesh and probe with this script https://github.com/NVlabs/nvdiffrecmc/blob/main/blender/blender.py instead and check if that works better and report back?
We just released source code for an extension of nvdiffrec (with better light and material separation), and we revised the blender loading script. It may be a rotation missing on the light probe in the old script in issue 21.
As a side note, light estimation of mostly diffuse models is fairly challenging in general.
For a "best case" light estimation (Fig 11 in the paper), you can test this config: https://github.com/NVlabs/nvdiffrec/blob/main/configs/spot_metal.json which has highly specular material and locked, simple, smooth geometry.
I just tried it with the new script, now the light estimation looks normal. The black areas in the environment map coincide fairly neatly with the black areas in the probe.hdr when rendered with the script. Congratulations on the new nvdiffrec release @jmunkberg. I have to say, I am very happy to see the added regularisation on specularity, will definitely check it out! Your comments in this issue have been most helpful, I am gonna close it now. Thanks again.
I am trying to get nvdiffrec to do light estimation in a controlled setting. I suspect that there are some coordinate frame and lighting issues. Here is the procedure I am following: First, I generate a synthetic dataset using a custom environment map from the NeRF lego script. Then, I try to train the model from the generated dataset without giving an environment map, or a reference mesh. I end up with a mesh that is flipped 90 degrees on the x-axis and the lighting is badly estimated.
Here is the result mesh and the lego before training: ![lego_after](https://user-images.githubusercontent.com/41761641/192760452-1438cf7b-2b85-4ccf-b224-fb4ebad1e533.png)