Closed — KopanevPavel closed this issue 2 years ago
Yeah, you're right! For unseen views, we still need depth priors, because the network only learns the neural radiance field around the depth priors' regions. Actually, using the depth prior from the nearest seen view is an approximation. A better solution may be to use the training views to generate a mesh and project it onto the novel views to generate depth priors, which we leave for future work.
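The "nearest seen view" heuristic above can be sketched as picking the training view whose camera center is closest to the novel view's camera center and reusing its depth prior. This is only a minimal illustration; the function and argument names are hypothetical, not the repo's actual API:

```python
import numpy as np

def nearest_view_depth_prior(novel_c2w, train_c2ws, train_depth_priors):
    """Return the depth prior of the training view whose camera center
    is closest to the novel view's camera center.

    novel_c2w: (4, 4) camera-to-world pose of the novel view.
    train_c2ws: (N, 4, 4) poses of the training views.
    train_depth_priors: length-N sequence of depth priors (one per view).

    Note: this is an approximation — the borrowed prior is not rendered
    from the novel viewpoint, just taken from the nearest training pose.
    """
    novel_center = novel_c2w[:3, 3]            # camera center = translation column
    train_centers = train_c2ws[:, :3, 3]       # (N, 3) training camera centers
    dists = np.linalg.norm(train_centers - novel_center, axis=1)
    return train_depth_priors[int(np.argmin(dists))]
```

The mesh-projection alternative mentioned above would instead fuse the training-view depths into geometry and rasterize that geometry from each novel pose, which gives a prior consistent with the novel viewpoint rather than a borrowed one.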
Hello :) Thanks for sharing the code. Could you please clarify: if we do a forward pass through an already-learned scene to render views from the poses of some arbitrary trajectory (e.g. a circle), do we still need depth priors?
Asking this because of this line from the `render_path()` function in `src/runnerf.py`:

```python
rgb, disp, acc, depth = render(H, W, focal, depth_priors=depth_priors[i],
                               depth_confidences=depth_confidences[i],
                               chunk=chunk, c2w=c2w[:3, :4], **render_kwargs)
```
And for poses with unseen views, you utilize the depth prior from the nearest seen view, correct?