Open Woo-Seok-Kim opened 1 year ago
Thanks for your interest! Before being fed into the auxiliary plane module network, the view direction is expanded (broadcast) to have the same shape as the sampled points. Then we can use the same equation as in NeRF to calculate the rendering weights.
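For concreteness, here is a minimal sketch of what this reply describes, under my own assumptions (this is not the authors' code): a per-ray view direction broadcast to per-sample shape, and rendering weights computed with the standard NeRF formula w_i = T_i (1 - exp(-σ_i δ_i)), with T_i the accumulated transmittance. Names like `n_rays`, `n_samples`, `sigma`, and `delta` are illustrative.

```python
import numpy as np

def nerf_weights(sigma, delta):
    """Standard NeRF rendering weights:
    w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    T_i = exp(-sum_{j<i} sigma_j * delta_j)  (exclusive cumulative product form).
    """
    alpha = 1.0 - np.exp(-sigma * delta)                      # (n_rays, n_samples)
    trans = np.cumprod(1.0 - alpha, axis=-1)
    # Shift right so T_0 = 1 (transmittance is exclusive of the current sample)
    trans = np.concatenate([np.ones_like(trans[..., :1]), trans[..., :-1]], axis=-1)
    return alpha * trans

n_rays, n_samples = 2, 8
rng = np.random.default_rng(0)

view_dirs = rng.standard_normal((n_rays, 3))
view_dirs /= np.linalg.norm(view_dirs, axis=-1, keepdims=True)

# Expand the per-ray direction so every sample along the ray sees the same direction input
dirs_per_sample = np.broadcast_to(view_dirs[:, None, :], (n_rays, n_samples, 3))

sigma = np.abs(rng.standard_normal((n_rays, n_samples)))  # placeholder per-sample densities
delta = np.full((n_rays, n_samples), 0.1)                 # sample spacing along the ray
w = nerf_weights(sigma, delta)
print(w.shape)   # (2, 8)
```

The weights along each ray sum to at most 1, matching the NeRF volume rendering formulation.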
Thanks for your kind reply. After expanding the view direction, is it concatenated with each sampled point's position, or something else? (I thought the view direction does not change along a ray, so if only the view direction is fed to the network, it will produce the same output for every point, and hence the same density for all points on the same ray.)
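The input layout this question hypothesizes (purely an assumption on my part, not confirmed by the paper) can be sketched as follows: each sample's 3D position is concatenated with the expanded view direction, so the network input, and therefore the predicted density, can vary along the ray.

```python
import numpy as np

n_rays, n_samples = 2, 8
rng = np.random.default_rng(1)

pts = rng.standard_normal((n_rays, n_samples, 3))   # sampled positions along each ray
view_dirs = rng.standard_normal((n_rays, 3))
view_dirs /= np.linalg.norm(view_dirs, axis=-1, keepdims=True)

# The same per-ray direction, repeated for every sample
dirs = np.broadcast_to(view_dirs[:, None, :], pts.shape)

# Hypothetical network input: [x, y, z, dx, dy, dz] per sample.
# Because `pts` differs per sample, a network fed this input can predict
# a different density at each point, unlike a direction-only input.
net_input = np.concatenate([pts, dirs], axis=-1)
print(net_input.shape)   # (2, 8, 6)
```

With a direction-only input, every sample on a ray would receive identical features, which is exactly the concern raised above.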
Thanks for the great work! I'm also interested in reconstructing 3D scenes with glass reflections, and I'd like to ask some questions about NeuS-HSV's auxiliary plane module (sorry if I misunderstood your explanation).
In Section 3.3, the weights of the auxiliary plane are calculated using the volume density σ, which is an output of the auxiliary plane module network. However, as described in Figure 5(a), the auxiliary plane module network takes the view direction v as input and outputs {volume density σ, distance between the camera center and the plane, plane normal}, so the volume density σ is not computed for each sampled point along the plane path. Yet in equation (6), the plane path's weight appears to be calculated with the same equation as NeRF, i.e., using each point's density. How do you obtain the density of each sampled point from the auxiliary plane module network?