NVlabs / nvdiffmodeling

Differentiable rasterization applied to 3D model simplification tasks

Why is there a difference between the rendered images and the final mesh #26

Open Lucklycat opened 1 year ago

Lucklycat commented 1 year ago

Thank you very much for your series of work. I tried to optimize nvdiffmodeling with reference to nvdiffrecmc to improve training efficiency. The training speed did improve, but the results are still not very good. There is one point I find very confusing and hope you can answer: my experiment takes an initial model simplified by Simplygon, plus 250 images rendered from the high-poly model together with their corresponding camera positions, and then uses these images as the target to optimize the initial model.

From the pictures generated during the optimization process, the result looks fairly good. But why is there such a gap between these renders and the mesh imported into tools like Blender, both in terms of the geometry and the texture maps?

[screenshot]

When I import it into Blender, the geometry alone is already wrong; the eyes and chin are deformed:

[screenshot]

When the texture maps are applied, the difference is even greater:

[screenshot]

May I ask whether the import into Blender is incorrect because Blender does not support the corresponding filtering algorithm? What software should I use to import the results? And why can the results generated by nvdiffrecmc be imported into Blender without problems?

Thanks~

jmunkberg commented 1 year ago

I would recommend starting with a setup similar to https://github.com/NVlabs/nvdiffrecmc#use-the-extracted-3d-models-in-blender to get it right in Blender. That said, we did not release a Blender import script for nvdiffmodeling, so the nvdiffrecmc version may differ a bit. In nvdiffmodeling, we do not capture lights, only materials and meshes.

It does work to render assets from nvdiffmodeling in another renderer. Figs. 1, 7, and 10 in our paper https://arxiv.org/pdf/2104.03989.pdf are all exported meshes and materials from our approach rendered in a path tracer.

There are a few things to consider when you move to another renderer:

  • Verify that the tangent space is consistent between the renderers (needed for the normal map to look correct).
  • The base color texture is in sRGB; the ks texture is stored linearly, and the same holds for the normal map (see the loading sketch below).
  • In nvdiffmodeling, we train with a single random point light. The result should generalize to novel lighting, but may not look exactly the same, depending on light placement, intensities, etc.
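
On the color-space point, here is a minimal sketch of how the exported textures might be loaded for use in another renderer, assuming the standard sRGB transfer function; the file names are illustrative, not fixed output names:

```python
import numpy as np
import imageio.v2 as imageio

def srgb_to_linear(c):
    # Standard sRGB electro-optical transfer function, applied per channel
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

# kd (base color) is stored in sRGB and must be linearized before shading.
kd = imageio.imread("texture_kd.png").astype(np.float32) / 255.0
kd = srgb_to_linear(kd)

# ks and the normal map are stored linearly: load them as-is.
ks = imageio.imread("texture_ks.png").astype(np.float32) / 255.0
nrm = imageio.imread("texture_n.png").astype(np.float32) / 255.0
nrm = nrm * 2.0 - 1.0  # remap [0, 1] -> [-1, 1] tangent-space vectors
```

In Blender terms, this corresponds to leaving the base color image in sRGB and setting the ks and normal map images to Non-Color.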

Lucklycat commented 1 year ago


Thank you for your reply @jmunkberg. There is still one point I don't understand: the texture maps from nvdiffrecmc differ from the maps nvdiffmodeling produces for the same target. The nvdiffmodeling maps contain much more noise, even though the config settings are the same. See this example: https://github.com/NVlabs/nvdiffmodeling/issues/24#issuecomment-1413620777

You told me that this was due to mipmaps. I checked the source code of both projects, and both appear to use mipmapping. So what causes the big difference in the textures? Is it because denoising is used in nvdiffrecmc?

jmunkberg commented 1 year ago

The shading models of nvdiffmodeling and nvdiffrecmc are different during optimization.

My point about mipmaps is mainly that you should optimize for the same conditions under which you intend to view the model. For example, if you want to render the model at 1024x1024 resolution, it is ideal to optimize with views of the model at a similar resolution. See Fig. 8 in https://arxiv.org/pdf/2104.03989.pdf for an example of the quality difference when optimizing at two different resolutions.
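
To make the intuition concrete, here is a rough back-of-the-envelope sketch (my own illustration, not code from either repository) of which mip level trilinear filtering tends to sample for a given texture/render resolution. Texels that only ever contribute through coarse mip levels receive little gradient during optimization and can stay close to their initialization:

```python
import math

def sampled_mip_level(texture_res, render_res, uv_coverage=1.0):
    # Texels covered per pixel along one axis, assuming the model fills
    # uv_coverage of the frame (purely illustrative numbers).
    texels_per_pixel = texture_res * uv_coverage / render_res
    # Trilinear filtering samples roughly mip level log2(footprint).
    return max(0.0, math.log2(texels_per_pixel))

print(sampled_mip_level(2048, 512))   # ~2.0: base level barely trained
print(sampled_mip_level(2048, 2048))  # 0.0: base level receives gradients
```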

Lucklycat commented 1 year ago

Thank you for your reply @jmunkberg

In fact, the most fundamental question is why there is so much noise in the ks, kd, and kn maps produced by nvdiffmodeling.

In a previous reply you said that the initial parameters of the high-resolution maps are set according to the given training parameters, but that part of them is never updated during actual training, which is why the noise appears?

So, following what you said, setting the texture resolution and the training resolution to the same value (512) should solve the problem. But after trying it, the resulting texture images still look as follows:

[screenshot]

The texture sampling algorithm you mentioned should be the same in both projects. I think the texture sampling part of the source code is here:

[screenshot]

So is the result of importing into Blender not ideal because Blender has no equivalent of the linear-mipmap-linear filtering? And if I change this part of the texture function:

[screenshot]

setting max_mip_level to 0, would that solve the problem?
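
For reference, nvdiffrast exposes that cap directly as an argument of dr.texture, so the experiment would look roughly like this (tensor names are illustrative; max_mip_level=0 restricts sampling to the base level, which also means only the base level receives gradients):

```python
import nvdiffrast.torch as dr

# tex: [1, H, W, C] texture; uv, uv_da: interpolated texture coordinates
# and their screen-space derivatives from the rasterization pass.
color = dr.texture(
    tex, uv, uv_da,
    filter_mode='linear-mipmap-linear',
    max_mip_level=0,  # no coarser mip levels are built or sampled
)
```

Whether this also fixes the mismatch in Blender is a separate question, since Blender applies its own texture interpolation.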

As you said above, Figures 1, 7, and 10 in the paper show the results rendered in another renderer. What software was used for those renders? And is the character the Unreal Engine Paragon asset?