morsingher closed this issue 2 years ago
In my experiments, using the density gradient directly is not good enough. I have seen other works explicitly create a separate branch to predict normals, and they report better results.
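To make the gradient-based option above concrete, here is a minimal sketch of deriving a normal from the negative gradient of a density field via central finite differences. The analytic `density` function is a toy stand-in for the network (in practice you would query the MLP and could use autograd instead); the predicted-branch alternative would simply be an extra output head of that MLP.

```python
import numpy as np

def density(x):
    # Toy density field: falls off linearly outside the origin
    # (hypothetical stand-in for a trained density MLP)
    return max(0.0, 1.0 - np.linalg.norm(x))

def gradient_normal(x, eps=1e-4):
    # Surface normal as the negated, normalized density gradient,
    # estimated with central finite differences along each axis
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (density(x + d) - density(x - d)) / (2.0 * eps)
    n = -g  # density decreases when moving outward, so negate
    return n / (np.linalg.norm(n) + 1e-8)
```

For this radially symmetric toy field, the normal at any interior point points away from the origin, matching the analytic gradient.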
It may seem strange to perform volume rendering on normals, but I have seen many other quantities computed this way, such as optical flow and semantic segmentation classes. However, I think you need an additional constraint that minimizes the entropy of the weights along each ray (so that the density peaks at a single location); otherwise the rendered quantity does not have a very meaningful interpretation.
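A minimal sketch of the idea above, assuming standard alpha-compositing weights: normals are rendered as a weighted sum along the ray, and the entropy of the (normalized) weights serves as the regularizer that encourages the density to peak at one sample. Function and variable names are illustrative, not from the repository.

```python
import numpy as np

def render_along_ray(sigma, normals, deltas):
    # Standard alpha compositing: w_i = T_i * (1 - exp(-sigma_i * delta_i))
    alpha = 1.0 - np.exp(-sigma * deltas)
    T = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    w = T * alpha
    # Volume-rendered normal: weighted sum of per-sample normals,
    # re-normalized to unit length afterwards
    n = (w[:, None] * normals).sum(axis=0)
    n = n / (np.linalg.norm(n) + 1e-8)
    # Entropy of the weight distribution along the ray; minimizing it
    # pushes the density toward a single peaked surface crossing
    p = w / (w.sum() + 1e-8)
    entropy = -(p * np.log(p + 1e-8)).sum()
    return n, entropy
```

With a sharply peaked density the weights concentrate on one sample and the entropy is near zero, while a uniform density spreads the weights and yields a large entropy, which is exactly what the regularizer penalizes.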
Hi, thanks for the answer. Just a quick follow-up question for further clarification: normals after volume rendering are expressed in world coordinates, right? I believe so, since the sample points themselves are expressed in world coordinates.
Hi, I have noticed that in one of your branches you add a normal consistency loss (as in UNISURF and other works). I have two questions:
Thank you in advance for the help!