Haian-Jin / TensoIR

[CVPR 2023] TensoIR: Tensorial Inverse Rendering
https://haian-jin.github.io/TensoIR/
MIT License

Limitation on glossy materials #13

Closed: wonjunior closed this issue 9 months ago

wonjunior commented 1 year ago

Thank you again for this insightful research project and for providing us with the source code.

I was trying to track down an issue related to glossy estimation. I did a simple test where I optimize a glossy scene and then render novel views, overriding the roughness (after volume rendering is done) so that all 800x800 pixels have a roughness of 0. I tried other values as well, but values lower than 0.5 seem to cause issues. Below, I am showing results for different r values:

I did an ablation for r=0.2; the strange highlights seem to come from the directly lit specular component: image

Do you know why there are artifacts for r=0.2, while r=0 looks very matte? Let me know if I overlooked something.

Haian-Jin commented 1 year ago

Hi, thanks for your exploratory experiments! Can you give me more details about your experiments? For example:

  1. What is the motivation for these experiments?
  2. Do you mean that you fix the roughness values while predicting the albedo values and environment lighting?
  3. What do the GT input images look like?

I did notice you sent me an email several days ago. We can discuss your questions here or via private email, whichever you prefer.

Best, Haian

wonjunior commented 1 year ago

Hi Haian, thank you for your reply! Concerning the motivation behind this test: I was trying to construct a minimal example to find the limiting factor that prevents TensoIR from predicting (very) glossy materials, in other words, materials rendered in Blender with a roughness of 0. image

Answering 2.: I optimize TensoIR on this glossy-ball scene, then perform novel-view rendering. After the decomposition has been inferred and volume rendering is done (all 800x800 maps are constructed), I overwrite the roughness with values such as 0 or 0.2 to see what render is produced.
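For reference, the override step described here can be sketched as follows (a minimal sketch; the buffer names are mine, not TensoIR's, and the shading pass would consume the overridden map in place of the predicted one):

```python
import numpy as np

def override_roughness(roughness_map: np.ndarray, value: float) -> np.ndarray:
    """Replace the predicted per-pixel roughness with a constant value.

    `roughness_map` stands in for the H x W roughness buffer produced
    after volume rendering, before the shading/relighting pass runs.
    """
    return np.full_like(roughness_map, value)

# Example: force every pixel of an 800x800 roughness buffer to r = 0.2.
predicted = np.random.rand(800, 800).astype(np.float32)
overridden = override_roughness(predicted, 0.2)
```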

So for r=0, I would expect a very shiny object reflecting the environment (just as in the GT above), and something slightly blurrier for 0.2. Trying to analyze the result obtained for r=0.2: could it be an issue with how the SG is obtained and the illumination computed?

Haian-Jin commented 1 year ago

I see.

I think there are two key reasons that cause TensoIR's bad performance on very glossy materials:

  1. For the incident lighting directions, I use stratified sampling to randomly sample 512 lighting directions (half of which are then filtered out according to the normal directions). One problem is that for very glossy materials whose roughness is zero, the shiny color comes from one particular incident direction: the mirror reflection of the view direction about the normal. That direction is hard to sample accurately with stratified sampling. I think that to handle very shiny objects, the stratified sampling must be replaced by some other Monte Carlo sampling method. You can also try to borrow some ideas from the Ref-NeRF paper.
  2. Environment maps represented by SGs (spherical Gaussians) cannot represent the high-frequency environment-lighting details visible in the reflections of the GT images.
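The sampling gap behind Reason 1 can be illustrated with a small sketch (the grid parameters and names are my illustration, not the repo's exact scheme): with 512 stratified directions over the hemisphere, the nearest sample typically sits several degrees away from any given mirror direction, while a roughness-0 lobe is nearly a delta around it.

```python
import numpy as np

def stratified_hemisphere(n_theta=16, n_phi=32, seed=0):
    """One jittered sample per cell of an n_theta x n_phi grid over the
    upper hemisphere (n_theta * n_phi = 512 directions, uniform in
    solid angle since cos(theta) and phi are stratified)."""
    rng = np.random.default_rng(seed)
    i, j = np.meshgrid(np.arange(n_theta), np.arange(n_phi), indexing="ij")
    u = (i + rng.random(i.shape)) / n_theta   # -> cos(theta) in [0, 1]
    v = (j + rng.random(j.shape)) / n_phi     # -> phi in [0, 2*pi)
    cos_t = u.ravel()
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = 2.0 * np.pi * v.ravel()
    return np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=-1)

dirs = stratified_hemisphere()                       # (512, 3) unit vectors
mirror = np.array([0.3, 0.4, np.sqrt(0.75)])         # arbitrary mirror direction
gap_deg = np.degrees(np.arccos(np.clip(dirs @ mirror, -1.0, 1.0)).min())
print(f"closest sample is {gap_deg:.1f} degrees from the mirror direction")
```

Unless one of the 512 samples happens to land inside a fraction-of-a-degree cone around the mirror direction, the highlight is simply never seen by the estimator.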

I personally think that Reason 1 is more important. Happy to continue this conversation.

wonjunior commented 1 year ago

Hi, thank you so much for your detailed answer. I ran the relighting code with importance sampling (IS) from the ground-truth env map, and the highlight showed up much better.
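For readers following along, the intensity-based importance sampling being discussed can be sketched as building a discrete pdf over lat-long texels, proportional to luminance times each texel's solid angle (i.e. sin(theta)), then drawing texels by inverse-CDF lookup. This is my own minimal sketch, not the script's actual code:

```python
import numpy as np

def build_env_cdf(env: np.ndarray):
    """env: (H, W, 3) lat-long environment map. Returns a flat pdf and
    CDF over texels, weighted by luminance * sin(theta)."""
    h, w, _ = env.shape
    lum = env @ np.array([0.2126, 0.7152, 0.0722])   # per-texel luminance
    theta = (np.arange(h) + 0.5) / h * np.pi         # texel-center theta
    pdf = lum * np.sin(theta)[:, None]               # solid-angle weight
    pdf = pdf.ravel() / pdf.sum()
    return pdf, np.cumsum(pdf)

def sample_env(cdf, w, u):
    """Inverse-CDF sample: map uniform u in [0,1) to a texel (row, col)."""
    idx = np.searchsorted(cdf, u)
    return idx // w, idx % w

# A single bright texel should dominate the drawn samples.
env = np.ones((16, 32, 3)); env[2, 5] = 100.0
pdf, cdf = build_env_cdf(env)
hits = sum(sample_env(cdf, 32, u) == (2, 5)
           for u in np.random.default_rng(0).random(1000))
```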

I looked into the importance sampling in your relighting script. You use a pdf derived solely from the intensity (scaled by the solid angle). Do you think rendering glossy objects would require more advanced sampling, as in NeRO for example, where they use cosine-weighted and NDF sampling for the diffuse and specular components, respectively?
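The two samplers mentioned here can be sketched in the local shading frame where the normal is +z (a hedged illustration of the standard formulas, not NeRO's actual code): cosine-weighted sampling for the diffuse term, and GGX NDF sampling of half-vectors for the specular term.

```python
import numpy as np

def sample_cosine(u1, u2):
    """Cosine-weighted hemisphere direction: pdf = cos(theta) / pi."""
    r = np.sqrt(u1)
    phi = 2.0 * np.pi * u2
    return np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])

def sample_ggx_half(u1, u2, alpha):
    """GGX half-vector sample with alpha = roughness**2; the lobe
    concentrates around the normal as alpha -> 0, approaching a
    mirror delta."""
    cos_t = np.sqrt((1.0 - u1) / (1.0 + (alpha**2 - 1.0) * u1))
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = 2.0 * np.pi * u2
    return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

# As roughness shrinks, sampled half-vectors collapse onto the normal:
h_rough = sample_ggx_half(0.5, 0.3, alpha=0.5**2)
h_shiny = sample_ggx_half(0.5, 0.3, alpha=0.05**2)
```

With NDF sampling, low roughness concentrates the samples exactly where the specular lobe is, which is why it copes with near-mirror materials where uniform or stratified sampling fails.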

Other than that, I am still puzzled as to why all reflections disappear when the roughness value is lower than 0.05. I checked the rendering equations, but they match the analytical formulae...

Haian-Jin commented 1 year ago

For the first question, I agree that a more advanced sampling scheme is necessary, but you may need to try different sampling methods and observe their results. A sampling method that works well for forward rendering may not work well for inverse-rendering reconstruction.

For the second question, I think the problem is still the sampling. Maybe you can try the lighting intensity importance sampling in the relighting script when setting the roughness value to be very low.