Open wuge1880 opened 2 years ago
Nice model! Albedo brightness is fairly tricky in joint optimization of light and material, as the optimization process can decide to either explain it with a brighter material or brighter light. When working from real photographs, the settings here have worked ok for us https://github.com/NVlabs/nvdiffrec/blob/main/configs/nerd_gold.json You could also try to adjust the light regularizer: https://github.com/NVlabs/nvdiffrec/blob/main/geometry/dlmesh.py#L83, but we kept it constant for the examples in the paper.
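The ambiguity is easy to see with a toy Lambertian model: if a pixel is roughly albedo × irradiance, then scaling the albedo up and the light down by the same factor reproduces the photograph exactly. A minimal sketch in plain Python (the values are hypothetical, and real shading is of course more involved):

```python
# Toy illustration of the albedo/light scale ambiguity in joint optimization:
# two very different (albedo, light) pairs explain the same pixels equally well.

def render(albedo, irradiance):
    """Simplified diffuse shading: per-channel product of albedo and light."""
    return [a * irradiance for a in albedo]

true_albedo = [0.4, 0.3, 0.2]   # hypothetical "real" material
true_light = 1.0

# A solution the optimizer might find instead: brighter material, dimmer light.
bright_albedo = [2.0 * a for a in true_albedo]
dim_light = true_light / 2.0

# Both pairs render to identical pixels, so image loss alone cannot
# distinguish them; that is what the light regularizer is for.
assert render(true_albedo, true_light) == render(bright_albedo, dim_light)
```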
For Blender, did you follow the description in the nvdiffrecmc code base? That is the setup/shader network we are using. https://github.com/NVlabs/nvdiffrecmc#use-the-extracted-3d-models-in-blender
Hello, when I run your code with a fixed mesh, I found that the color of the final Kd texture is very different from the original photos, as shown below:
original photo Kd texture
You can see that the color of the Kd texture is much lighter than that of the original photo. Is this a normal result? If not, how should I adjust it?
Besides, when I rendered the final results in Blender using the script you provided, the material always looked like metal (the real material is plastic). But the rendering results in "img_mesh_pass" look perfect. How should I get a more realistic result using Blender (or other software)?
Thank you in advance !
rendering results in blender
rendering results in "img_mesh_pass"
Could you share how the dataset was constructed and which training parameters you used? Thanks.
I reconstructed the model using an MVS method; the training parameters were the same as in nerd_gold.json: https://github.com/NVlabs/nvdiffrec/blob/main/configs/nerd_gold.json
Thanks for your reply, I will try it !
You can also consider giving the optimizer a hint through the config. It's possible to limit the optimization range for the kd/ks parameters. You can set ks_max : [0, 1, 0] to force plastic-only materials (the third ks parameter controls metalness).
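Since the third ks channel is metalness, a metallic look in Blender usually means that channel ended up high in the extracted ks texture. A small pure-Python sketch of both the diagnosis and what the [0, 1, 0] clamp does (the texel layout and helper names are illustrative, not the repo's API):

```python
def metalness_stats(ks):
    """ks: rows of (x, roughness, metalness) texels in [0, 1]."""
    metal = [texel[2] for row in ks for texel in row]
    return sum(metal) / len(metal), max(metal)

def clamp_ks(texel, ks_min=(0, 0, 0), ks_max=(0, 1, 0)):
    """Limit each ks channel to its allowed range, as the config hint does.

    With ks_max = (0, 1, 0) only roughness (channel 1) can move; the
    metalness channel is pinned to 0, i.e. plastic-only materials.
    """
    return tuple(min(max(v, lo), hi) for v, lo, hi in zip(texel, ks_min, ks_max))

# A texture the optimizer made almost fully metallic:
ks = [[(0.0, 0.4, 0.95), (0.0, 0.5, 0.90)]]
print(metalness_stats(ks))          # high mean metalness -> metal look in Blender
print(clamp_ks((0.0, 0.4, 0.95)))   # metalness forced to 0, roughness kept
```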
Thanks for your advice!
Hi, I found that an env_light file can be loaded in your code. I'd like to know: is it possible to get better albedo brightness if we load the HDR before training? Thanks.
Yes, if you only optimize shape and topology, not lighting, you should get higher quality results, as it is an easier optimization task with fewer degrees of freedom.
To test this, add the following two lines to the config:
"envmap" : [path to a .hdr file]
"learn_light" : false
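For example, on top of a config like nerd_gold.json the two keys could look like this (the path below is just a placeholder for your own captured or estimated HDR probe):

```json
{
    "envmap" : "path/to/probe.hdr",
    "learn_light" : false
}
```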
So if I only have images, I either need to estimate the envmap with another method and then use nvdiffrec to predict the material, or directly predict both the env map and the material. Thanks.