NVlabs / nvdiffrec

Official code for the CVPR 2022 (oral) paper "Extracting Triangular 3D Models, Materials, and Lighting From Images".

The color of Kd texture is very different from the photos. #91

Open wuge1880 opened 2 years ago

wuge1880 commented 2 years ago

Hello, when I run your code with a fixed mesh, I found that the color of the final Kd texture is very different from the original photos, as shown below:

[original photo: IMG_20221109_155652] [Kd texture: texture_kd]

You can see that the color of the Kd texture is much lighter than that of the original photo. Is this a normal result? If not, how should I adjust it?

Besides, when I rendered the final results in Blender using the script you provided, the material always looked like metal (the real material is plastic). But the rendering results in "img_mesh_pass" look perfect. How should I get a more realistic result using Blender (or other software)?

Thank you in advance!

[rendering results in Blender: image]

[rendering results in "img_mesh_pass": img_mesh_pass_000013]

jmunkberg commented 2 years ago

Nice model! Albedo brightness is fairly tricky in the joint optimization of light and material, as the optimization process can decide to explain it either with a brighter material or with brighter light. When working from real photographs, the settings here have worked OK for us: https://github.com/NVlabs/nvdiffrec/blob/main/configs/nerd_gold.json. You could also try to adjust the light regularizer (https://github.com/NVlabs/nvdiffrec/blob/main/geometry/dlmesh.py#L83), but we kept it constant for the examples in the paper.

For Blender, did you follow the description in the nvdiffrecmc code base? That is the setup/shader network we are using. https://github.com/NVlabs/nvdiffrecmc#use-the-extracted-3d-models-in-blender

wyj302 commented 2 years ago

> Hello, when I run your code with a fixed mesh, I found that the color of the final Kd texture is very different from the original photos... How should I get a more realistic result using Blender (or other software)?

I would like to know, if convenient, how the dataset was constructed and which training parameters were used. Thanks.

wuge1880 commented 2 years ago

I reconstructed the mesh using an MVS method, and the training parameters were the same as in nerd_gold.json: https://github.com/NVlabs/nvdiffrec/blob/main/configs/nerd_gold.json

wuge1880 commented 2 years ago

> Nice model! Albedo brightness is fairly tricky in joint optimization of light and material... For Blender, did you follow the description in the nvdiffrecmc code base?

Thanks for your reply, I will try it!

JHnvidia commented 2 years ago

You can also consider giving the optimizer a hint through the config: it is possible to limit the optimization range of the kd/ks parameters. For example, you can set `ks_max : [0, 1, 0]` to force plastic-only materials (the third ks component controls metalness), as in the sketch below.
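A minimal sketch of where such a clamp could live in a config file; the keys other than `ks_min`/`ks_max` are illustrative placeholders modeled on the repo's example configs, not the exact contents of nerd_gold.json, and the paths are hypothetical:

```json
{
    "ref_mesh"        : "data/my_scan",
    "random_textures" : true,
    "texture_res"     : [2048, 2048],
    "ks_min"          : [0.0, 0.08, 0.0],
    "ks_max"          : [0.0, 1.0, 0.0],
    "out_dir"         : "out/my_scan_plastic"
}
```

With the third component of `ks_max` clamped to 0, the optimizer can never push metalness above zero, so the fitted material stays dielectric regardless of how the lighting is explained.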

wuge1880 commented 2 years ago

> You can also consider giving a hint to the optimizer through the config... You can set ks_max : [0, 1, 0] to force plastic-only materials (the third ks parameter controls metalness).

Thanks for your advice!

wuge1880 commented 1 year ago

> Nice model! Albedo brightness is fairly tricky in joint optimization of light and material... You could also try to adjust the light regularizer: https://github.com/NVlabs/nvdiffrec/blob/main/geometry/dlmesh.py#L83, but we kept it constant for the examples in the paper.

Hi, I found that an env_light file can be loaded in your code. I'd like to know: is it possible to get a better albedo brightness if we load an HDR environment map before training? Thanks.

jmunkberg commented 1 year ago

Yes, if you only optimize shape and topology, not lighting, you should get higher-quality results, as it is an easier optimization task with fewer degrees of freedom.

To test this, add the following two lines to the config:

"envmap" : [path to a .hdr file]
"learn_light" : false
renrenzsbbb commented 1 year ago

> Yes, if you only optimize shape and topology, not lighting, you should get higher quality results... To test this, add the following two lines to the config: "envmap" : [path to a .hdr file], "learn_light" : false

If I only have images, should I estimate the envmap with another method and then use nvdiffrec to predict only the materials, or should I let nvdiffrec predict the environment map and the materials directly? Thanks.