NVlabs / nvdiffrec

Official code for the CVPR 2022 (oral) paper "Extracting Triangular 3D Models, Materials, and Lighting From Images".

Getting textures to display correctly/closer to what the training preview shows #100

Closed · constantm closed this 1 year ago

constantm commented 1 year ago

Hello! Firstly, this is awesome work and I've loved playing around with it so far.

I've been generating synthetic datasets with Blender, using COLMAP to estimate camera poses from around 100 images, and then running everything through nvdiffrec to get back a mesh + textures. The mesh output is pretty decent; however, I'm having difficulty getting good textures. The images saved during training look pretty good, but when I open the finished mesh in another application, the textures look very washed out. Below I've attached a few crops to illustrate the issue:

  1. Crop of input image in img_mesh_pass_000100.png: [screenshot]
  2. Crop of training preview in img_mesh_pass_000100.png, i.e. what I would expect the end result to look like: [screenshot]
  3. Crop of the mesh imported into Blender with normal and specular maps applied: [screenshot]
  4. The Blender node setup used: [screenshot]
  5. The generated mesh opened in Meshlab, showing the same washed-out look: [screenshot]

I would expect the final output to look like the training previews (image 2), but I might be misunderstanding what the training previews actually are.

My config is as follows:

{
    "ref_mesh" : "nerf_dataset",
    "random_textures": true,
    "iter": 5000,
    "save_interval": 100,
    "texture_res": [ 2048, 2048 ],
    "train_res": [800, 800],
    "batch": 8,
    "learning_rate": [0.03, 0.01],
    "mesh_scale" : 2.2,
    "dmtet_grid" : 128,
    "out_dir": "out",
    "background" : "white"
}
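For reference, a config like this is normally passed straight to nvdiffrec's train.py (the config filename below is just an example):

```
python train.py --config configs/my_nerf_dataset.json
```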

So, my questions are: what exactly do the training previews represent, and how do I get the exported mesh + textures to match them when rendered elsewhere?

Any input here would be greatly appreciated, thank you!

constantm commented 1 year ago

I got it working with the Blender script located in the Nvdiffrec Monte Carlo repo here: https://github.com/NVlabs/nvdiffrecmc/blob/main/blender/blender.py

sadexcavator commented 1 year ago

> I've been generating synthetic datasets with Blender, using COLMAP to estimate camera poses from around 100 images, and then running it through nvdiffrec to get back a mesh + textures. […]

Hi, could you please give me some details on how to use the 'blender.py' script in Blender? I haven't used this software before and I'm stuck here. Thanks!

constantm commented 1 year ago

@sadexcavator it's pretty straightforward:

  1. Open the blender.py script in Blender
  2. Update the path in the script that points to the generated mesh
  3. Run the script by clicking the Play button

The result should be a mesh in your workspace with shading nodes set up correctly.
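For anyone who can't run that script directly, below is a minimal sketch of the kind of node setup it creates. It assumes the standard nvdiffrec export layout (mesh.obj next to texture_kd.png, texture_ks.png and texture_n.png, with roughness in the ks green channel and metallic in the blue channel) and a recent Blender (3.3+) Python API; the path is a placeholder, and the real blender.py in nvdiffrecmc handles more cases:

```python
import bpy

# Placeholder path -- point this at your nvdiffrec output folder.
mesh_dir = "/path/to/out/mesh"

# Import the extracted mesh (Blender 3.2+ importer; older versions
# use bpy.ops.import_scene.obj instead).
bpy.ops.wm.obj_import(filepath=mesh_dir + "/mesh.obj")
obj = bpy.context.selected_objects[0]

# Fresh material with the default Principled BSDF node tree.
mat = bpy.data.materials.new("nvdiffrec_mat")
mat.use_nodes = True
obj.data.materials.clear()
obj.data.materials.append(mat)

nodes = mat.node_tree.nodes
links = mat.node_tree.links
bsdf = nodes["Principled BSDF"]

# Albedo (kd): keep the default sRGB colour space so Blender applies
# the same display transform the training previews use.
kd = nodes.new("ShaderNodeTexImage")
kd.image = bpy.data.images.load(mesh_dir + "/texture_kd.png")
links.new(kd.outputs["Color"], bsdf.inputs["Base Color"])

# ks map: green = roughness, blue = metallic. This is raw data, so it
# must be Non-Color -- reading it as sRGB skews the shading.
ks = nodes.new("ShaderNodeTexImage")
ks.image = bpy.data.images.load(mesh_dir + "/texture_ks.png")
ks.image.colorspace_settings.name = "Non-Color"
sep = nodes.new("ShaderNodeSeparateColor")  # 3.3+; older: SeparateRGB
links.new(ks.outputs["Color"], sep.inputs["Color"])
links.new(sep.outputs["Green"], bsdf.inputs["Roughness"])
links.new(sep.outputs["Blue"], bsdf.inputs["Metallic"])

# Tangent-space normal map, also Non-Color.
nrm = nodes.new("ShaderNodeTexImage")
nrm.image = bpy.data.images.load(mesh_dir + "/texture_n.png")
nrm.image.colorspace_settings.name = "Non-Color"
nmap = nodes.new("ShaderNodeNormalMap")
links.new(nrm.outputs["Color"], nmap.inputs["Color"])
links.new(nmap.outputs["Normal"], bsdf.inputs["Normal"])
```

The colour-space assignments are the important part: kd stays sRGB so it matches what the training previews show, while the ks and normal maps must be Non-Color; interpreting them as sRGB distorts roughness/metallic and produces exactly this kind of washed-out look.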