Hi! Thank you for your work! The paper says it is possible to render novel views using adapted depth priors from the nearest view. However, it does not say how to handle missing regions if we adapt these predicted priors (I guess we would need to warp them here). I also could not find any code doing that. Could you please clarify this point?
We directly use the depth priors from the nearest view without warping. We did try warping the nearest views to the novel views, but this produces many holes. A potential solution is to generate a mesh from the training views and project it onto the novel views to obtain depth priors; we leave this for future work.
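To make the hole problem concrete, here is a minimal NumPy sketch of forward-warping a depth map into a novel view. This is not code from this repository; the shared pinhole intrinsics `K`, the camera-to-world source pose, and the world-to-camera target pose are all assumptions, and the function name is illustrative:

```python
import numpy as np

def warp_depth_to_novel_view(depth_src, K, T_src2world, T_world2dst):
    """Forward-warp a source-view depth map into a novel (destination) view.

    depth_src:   (H, W) depth map of the nearest training view
    K:           (3, 3) shared pinhole intrinsics (an assumption)
    T_src2world: (4, 4) source camera-to-world pose
    T_world2dst: (4, 4) novel-view world-to-camera pose

    Returns an (H, W) depth map for the novel view; pixels that receive no
    projected source point stay 0 -- these are the holes mentioned above.
    """
    H, W = depth_src.shape
    # Back-project every source pixel to a 3D point in camera coordinates.
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    pts_cam = (np.linalg.inv(K) @ pix.T) * depth_src.reshape(1, -1)

    # Move the points into the novel camera's frame.
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    pts_dst = (T_world2dst @ T_src2world @ pts_h)[:3]

    # Project into the novel view and splat depths (nearest point wins via z-buffer).
    proj = K @ pts_dst
    z = proj[2]
    valid = z > 1e-6
    u_dst = np.round(proj[0, valid] / z[valid]).astype(int)
    v_dst = np.round(proj[1, valid] / z[valid]).astype(int)
    z = z[valid]
    inside = (u_dst >= 0) & (u_dst < W) & (v_dst >= 0) & (v_dst < H)

    depth_dst = np.zeros((H, W))
    zbuf = np.full((H, W), np.inf)
    for uu, vv, zz in zip(u_dst[inside], v_dst[inside], z[inside]):
        if zz < zbuf[vv, uu]:
            zbuf[vv, uu] = zz
            depth_dst[vv, uu] = zz
    return depth_dst
```

Occlusions and sampling gaps mean many novel-view pixels never receive a splatted point, which is exactly why the warped prior ends up full of holes.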
This is an interesting solution. However, shouldn't it produce slightly worse images because of the inconsistency between the rendered image and the prior depth?
We think the adaptive sampling range can alleviate this issue. Yes, some black regions caused by this inconsistency still appear in the rendered images, but the quality is still better than the original NeRF.
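For readers of this thread, here is a minimal PyTorch sketch of the idea behind a depth-guided adaptive sampling range: each ray's sampling interval shrinks around its depth prior, widened by a per-ray uncertainty. The exact error model in the paper and repo may differ, and all names here are illustrative assumptions:

```python
import torch

def depth_guided_sample_bounds(depth_prior, depth_err, near, far):
    """Shrink each ray's [near, far] sampling interval around its depth prior.

    depth_prior: (N,) per-ray depth taken from the nearest view's prior
    depth_err:   (N,) per-ray uncertainty of that prior (an assumed input);
                 larger error -> wider sampling range
    near, far:   scene-wide bounds, used as a fallback / clamp
    """
    near_i = torch.clamp(depth_prior - depth_err, min=near)
    far_i = torch.clamp(depth_prior + depth_err, max=far)
    # Degenerate intervals (missing prior or huge error) fall back to [near, far],
    # which is how an un-warped, inconsistent prior can still be tolerated.
    bad = far_i <= near_i
    near_i[bad] = near
    far_i[bad] = far
    return near_i, far_i

def sample_along_rays(near_i, far_i, n_samples):
    """Uniform samples inside each ray's adaptive interval."""
    t = torch.linspace(0.0, 1.0, n_samples, device=near_i.device)
    return near_i[:, None] + (far_i - near_i)[:, None] * t[None, :]
```

When the prior is roughly right, the network only searches a narrow band around it; when it is inconsistent with the novel view, the widened or fallback interval limits the damage, at the cost of the occasional black region mentioned above.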
Got it. Thank you for your responses!