iamNCJ / NRHints

Official Code Release for [SIGGRAPH 2023] Relighting Neural Radiance Fields with Shadow and Highlight Hints
https://nrhints.github.io
MIT License

Why can the method obtain comparable results to Gao et al. using only 500 images? #1

Closed yxuhan closed 1 year ago

yxuhan commented 1 year ago

Hi, thanks for your work and code, the results are pretty impressive!

I'd like to know why the proposed method can obtain comparable results to Gao et al. using only 500 training images. I think both methods are trained in a scene-specific fashion and use the same kinds of hints. I wonder what specific design in the proposed method makes it more data efficient?

iamNCJ commented 1 year ago

Hi, thank you for your interest in our work!

You're right, both DNL [Gao et al. 2020] and our method are trained in a scene-specific fashion. However, DNL is neural-texture based and relies on a fixed proxy geometry that is not learnable. As a result, DNL has to use the 2D texture and the renderer to compensate for errors in the estimated geometry and camera poses, which requires more data to fit the appearance. Our method, in contrast, uses a NeRF-based 3D representation and optimizes both geometry (the density net) and appearance (the color net) during training (our latest released version optimizes camera poses as well). This is more physically grounded than DNL, provides better regularization, and therefore requires much less data.
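To make the density-net/appearance-net split concrete, here is a minimal numpy sketch of this kind of architecture (not the released code): a density MLP maps position to a learnable density plus a feature vector, and a color MLP maps that feature together with view direction, point-light position, and per-sample hint values (a hypothetical 2-vector standing in for the shadow and highlight hints) to RGB, composited by standard volume rendering. All network sizes and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Tiny fully connected net with ReLU hidden layers."""
    for W, b in weights[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = weights[-1]
    return x @ W + b

def make_weights(dims, rng):
    return [(rng.normal(0.0, 0.1, (i, o)), np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

# Density net: 3D position -> (density, 16-D feature); geometry is learnable.
density_net = make_weights([3, 32, 1 + 16], rng)
# Color net: feature + view dir + light pos + 2 hint values -> RGB.
color_net = make_weights([16 + 3 + 3 + 2, 32, 3], rng)

def render_ray(origin, direction, light_pos, hints, n_samples=32):
    # Sample points along the ray.
    ts = np.linspace(0.1, 4.0, n_samples)
    pts = origin + ts[:, None] * direction

    # Geometry branch: per-point density (softplus) and feature.
    out = mlp(pts, density_net)
    sigma = np.log1p(np.exp(out[:, 0]))
    feat = out[:, 1:]

    # Appearance branch, conditioned on view, light, and hints.
    cond = np.concatenate([feat,
                           np.tile(direction, (n_samples, 1)),
                           np.tile(light_pos, (n_samples, 1)),
                           hints], axis=1)
    rgb = 1.0 / (1.0 + np.exp(-mlp(cond, color_net)))  # sigmoid to [0, 1]

    # Standard volume-rendering compositing.
    delta = ts[1] - ts[0]
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)

color = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                   light_pos=np.array([1.0, 1.0, 1.0]),
                   hints=rng.uniform(size=(32, 2)))
```

Because density and color are both differentiable functions of the scene, errors in geometry can be corrected directly during optimization instead of being baked into a 2D texture.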

This difference also leads to different behavior when handling camera pose errors, which is discussed in Section 4 of our paper.

yxuhan commented 1 year ago

Got it, thanks for your reply.