kwea123 / nerf_pl

NeRF (Neural Radiance Fields) and NeRF in the Wild using pytorch-lightning
https://www.youtube.com/playlist?list=PLDV2CyUo4q-K02pNEyDr7DYpTQuka3mbV

Computing normals from NeRF #149

Closed · morsingher closed this issue 2 years ago

morsingher commented 2 years ago

Hi, I have noticed that in one of your branches you add a normal consistency loss (as in UNISURF and other works). I have two questions:

  1. Did you notice any difference between computing normals with finite differences and with autograd?
  2. Does it make sense to perform volumetric rendering on normals? Another possibility would be to compute the normal for pixel p directly at the corresponding 3D point x = o + td, where t is the rendered depth. What do you think about this? (Both options are sketched after this list.)
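
Here is a minimal sketch of both options in PyTorch, assuming a generic `density_fn` that maps (N, 3) world-space points to (N,) densities; the function names, shapes, and `eps` value are illustrative assumptions, not this repo's actual API:

```python
import torch
import torch.nn.functional as F

def normals_autograd(density_fn, x):
    # Normal as the negative, normalized gradient of density w.r.t. position.
    # `density_fn` (assumed) maps (N, 3) world-space points to (N,) densities.
    x = x.clone().requires_grad_(True)
    sigma = density_fn(x)
    grad = torch.autograd.grad(sigma.sum(), x, create_graph=True)[0]
    return -F.normalize(grad, dim=-1)

def normals_finite_diff(density_fn, x, eps=1e-3):
    # Same normal via central finite differences; no autograd graph needed.
    offsets = eps * torch.eye(3, device=x.device, dtype=x.dtype)  # (3, 3)
    grad = torch.stack(
        [(density_fn(x + offsets[i]) - density_fn(x - offsets[i])) / (2 * eps)
         for i in range(3)], dim=-1)  # (N, 3)
    return -F.normalize(grad, dim=-1)

def normal_at_rendered_depth(density_fn, rays_o, rays_d, depth):
    # Option from question 2: evaluate the normal once at the surface point
    # x = o + t * d, where t is the rendered (expected) depth per ray.
    x = rays_o + depth.unsqueeze(-1) * rays_d
    return normals_autograd(density_fn, x)
```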

Thank you in advance for the help!

kwea123 commented 2 years ago

In my experiments, using the gradient directly is not good enough. I have seen other works explicitly add another branch to predict the normal, which gives better results (a sketch of such a branch follows).
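
For concreteness, here is a minimal sketch of what such a prediction branch and its consistency loss could look like. The class name, `feat_dim`, and layer sizes are hypothetical, not the actual code in the branch mentioned in the question:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalHead(nn.Module):
    # Hypothetical extra branch: predicts a unit normal from the NeRF
    # feature vector (feat_dim and layer sizes are illustrative guesses).
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, feat):
        return F.normalize(self.mlp(feat), dim=-1)

def normal_consistency_loss(pred_normals, grad_normals):
    # Cosine-style penalty pulling the predicted normals toward the
    # (detached) density-gradient normals.
    return (1.0 - (pred_normals * grad_normals.detach()).sum(-1)).mean()
```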

It may seem strange to perform volume rendering on normals, but many other quantities are composited this way, such as optical flow and semantic segmentation logits. However, I think you also need an extra constraint that minimizes the entropy of the weights along each ray (so that the density peaks in one place); otherwise the rendered quantity has no clear interpretation. Both pieces are sketched below.
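
A minimal sketch of both pieces, assuming the usual (N_rays, N_samples) layout for per-ray samples; names and shapes are illustrative, not this repo's API:

```python
import torch

def composite_along_rays(sigmas, deltas, values):
    # Standard NeRF compositing weights applied to an arbitrary per-sample
    # quantity (normals, flow, semantic logits, ...).
    # sigmas, deltas: (N_rays, N_samples); values: (N_rays, N_samples, C).
    alphas = 1 - torch.exp(-sigmas * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1 - alphas + 1e-10], -1),
        -1)[:, :-1]
    weights = alphas * trans                            # (N_rays, N_samples)
    rendered = (weights.unsqueeze(-1) * values).sum(1)  # (N_rays, C)
    return rendered, weights

def ray_entropy_loss(weights, eps=1e-10):
    # Entropy of the normalized weight distribution along each ray; driving
    # it down encourages the density to peak at a single surface crossing.
    p = weights / (weights.sum(-1, keepdim=True) + eps)
    return -(p * torch.log(p + eps)).sum(-1).mean()
```

Normalizing the weights before taking the entropy keeps the loss from being trivially reduced by pushing all densities toward zero.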

morsingher commented 2 years ago

Hi, thanks for the answer. Just a quick follow-up question to clarify further: normals after volumetric rendering are expressed in world coordinates, right? I think so, since the sample points themselves are expressed in world coordinates and the gradient is taken with respect to them.