NVlabs / nvdiffrec

Official code for the CVPR 2022 (oral) paper "Extracting Triangular 3D Models, Materials, and Lighting From Images".

The results generated on the nerd dataset are poor #146

Open HeptagramV opened 8 months ago

HeptagramV commented 8 months ago

This is my result and configuration file. Does anyone know how to solve this, or has anyone run into the same problem?


{
    "ref_mesh": "data/nerd/moldGoldCape_rescaled",
    "isosurface" : "flexicubes",
    "random_textures": true,
    "iter": 5000,
    "save_interval": 100,
    "texture_res": [ 2048, 2048 ],
    "train_res": [512, 512],
    "batch": 8,
    "learning_rate": [0.03, 0.01],
    "dmtet_grid" : 128,
    "mesh_scale" : 5,
    "kd_min" : [0.03, 0.03, 0.03],
    "kd_max" : [0.8, 0.8, 0.8],
    "ks_min" : [0, 0.08, 0.0],
    "ks_max" : [0, 1.0, 1.0],
    "background" : "white",
    "display" : [{"bsdf":"kd"}, {"bsdf":"ks"}, {"bsdf" : "normal"}],
    "out_dir": "nerd_gold_flexi"
}

jmunkberg commented 8 months ago

Hello @HeptagramV ,

If the results from our provided configs for the nerd datasets, e.g., https://github.com/NVlabs/nvdiffrec/blob/main/configs/nerd_gold.json, look as expected, please consider renaming the issue to clarify that it is related to using flexicubes for isosurfacing.

The following config with flexicubes produced decent results in my test today.

{
    "ref_mesh": "data/nerd/moldGoldCape_rescaled",
    "random_textures": true,
    "isosurface" : "flexicubes",
    "iter": 5000,
    "save_interval": 100,
    "texture_res": [ 1024, 1024 ],
    "train_res": [512, 512],
    "batch": 8,
    "learning_rate": [0.03, 0.005],
    "dmtet_grid" : 96,
    "mesh_scale" : 2.5,
    "kd_min" : [0.03, 0.03, 0.03],
    "kd_max" : [0.8, 0.8, 0.8],
    "ks_min" : [0, 0.08, 0.0],
    "ks_max" : [0, 1.0, 1.0],
    "background" : "white",
    "display" : [{"bsdf":"kd"}, {"bsdf":"ks"}, {"bsdf" : "normal"}],
    "out_dir": "nerd_gold_flexi"
}
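For reference, a config like this can be written out and validated with the standard `json` module before launching training (a minimal sketch; the values are copied from the config above, and the output filename is an assumption):

```python
import json

# Config suggested above, expressed as a Python dict.
config = {
    "ref_mesh": "data/nerd/moldGoldCape_rescaled",
    "random_textures": True,
    "isosurface": "flexicubes",
    "iter": 5000,
    "save_interval": 100,
    "texture_res": [1024, 1024],
    "train_res": [512, 512],
    "batch": 8,
    "learning_rate": [0.03, 0.005],
    "dmtet_grid": 96,
    "mesh_scale": 2.5,
    "kd_min": [0.03, 0.03, 0.03],
    "kd_max": [0.8, 0.8, 0.8],
    "ks_min": [0, 0.08, 0.0],
    "ks_max": [0, 1.0, 1.0],
    "background": "white",
    "display": [{"bsdf": "kd"}, {"bsdf": "ks"}, {"bsdf": "normal"}],
    "out_dir": "nerd_gold_flexi",
}

# Serialize to a JSON file that train.py can consume.
with open("nerd_gold_flexi.json", "w") as f:
    json.dump(config, f, indent=4)
```

The file can then be passed to the training entry point as in the nvdiffrec README, e.g. `python train.py --config nerd_gold_flexi.json`.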

(validation renders: val_000001_opt vs. val_000001_ref)

HeptagramV commented 8 months ago

> If the results from our provided configs for the nerd datasets, e.g., https://github.com/NVlabs/nvdiffrec/blob/main/configs/nerd_gold.json, look as expected, please consider renaming the issue to clarify that it is related to using flexicubes for isosurfacing.
>
> The following config with flexicubes produced decent results in my test today.

Thank you very much for your project and your response. I trained with the configuration you provided today and found that the exported mesh still looks poor. Do I need to adjust additional settings in the export step?

This is the mesh I exported:

jmunkberg commented 8 months ago

I'm not sure what you are using to visualize the model. Do the images dumped by the nvdiffrec renderer during training look ok? To visualize the exported models, you need to carefully check that the coordinate frames, material textures, and tangent spaces match.

We provide a Blender script for this in nvdiffrecmc, https://github.com/NVlabs/nvdiffrecmc#use-the-extracted-3d-models-in-blender, and I think it should also work for nvdiffrec.
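Independent of Blender, a quick sanity check on the exported mesh is to confirm the OBJ actually contains normals and UVs before debugging shading in a viewer. This is a stdlib-only sketch (the tiny triangle OBJ and the `obj_stats` helper are illustrative assumptions, not part of nvdiffrec):

```python
def obj_stats(path):
    """Count vertices (v), normals (vn), UVs (vt) and faces (f) in a Wavefront OBJ."""
    counts = {"v": 0, "vn": 0, "vt": 0, "f": 0}
    with open(path) as f:
        for line in f:
            tag = line.split(maxsplit=1)[0] if line.strip() else ""
            if tag in counts:
                counts[tag] += 1
    return counts

# Tiny example OBJ: one triangle with UVs and a shared normal.
example = """\
v 0 0 0
v 1 0 0
v 0 1 0
vt 0 0
vt 1 0
vt 0 1
vn 0 0 1
f 1/1/1 2/2/1 3/3/1
"""
with open("triangle.obj", "w") as f:
    f.write(example)

stats = obj_stats("triangle.obj")
print(stats)  # {'v': 3, 'vn': 1, 'vt': 3, 'f': 1}
```

If `vn` or `vt` come back as zero for an exported mesh, the viewer has nothing to shade or texture with, which can explain a rough-looking result even when the training renders look fine.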

HeptagramV commented 8 months ago

> I'm not sure what you are using to visualize the model. Do the images dumped by the nvdiffrec renderer during training look ok? To visualize the exported models, you need to carefully check that the coordinate frames, material textures, and tangent spaces match.
>
> We provide a Blender script for this in nvdiffrecmc, https://github.com/NVlabs/nvdiffrecmc#use-the-extracted-3d-models-in-blender, and I think it should also work for nvdiffrec.

Thank you for your prompt response, I am very grateful for your help! I tried loading the model in Blender, and this is the result I got:

I am also puzzled by the significant difference between the images rendered during training and the exported mesh: in those images, the inner side of the object is as smooth as the result you provided earlier, and clearly does not show the rough region that appears in the mesh I obtained.