We present a novel method to regularize a neural radiance field (NeRF) in the few-shot setting with a geometry-based consistency regularization. The proposed approach leverages NeRF's rendered depth maps to warp sparse input images to unobserved viewpoints and imposes them as pseudo ground truths to facilitate the learning of detailed features. By encouraging consistency at the feature level instead of using a pixel-level reconstruction loss, we regularize the network only at the semantic and structural levels while leaving view-dependent radiance free to model color variations. We apply the proposed consistency term in two ways: between observed and unobserved viewpoints, the image rendered at an unseen view is encouraged to model after the image warped from an input observation, while between observed viewpoints the warped image itself undergoes optimization, providing geometry-specific regularization. We also present an effective method to filter out erroneously warped solutions, together with techniques to stabilize training during optimization. We show that our model achieves competitive results compared to concurrent few-shot NeRF models.
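To make the warping step concrete, below is a minimal PyTorch sketch of depth-guided inverse warping: pixels of the unobserved view are unprojected with the NeRF-rendered depth, transformed into a source camera, and used to resample the source image. The function name, the shared-intrinsics assumption, and the tensor shapes are illustrative choices, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def warp_source_to_unobserved(src_img, depth_unobs, K, T_unobs2src):
    """Warp a source image to an unobserved viewpoint using the depth map
    rendered by NeRF at that viewpoint (backward/inverse warping).

    src_img:      (1, 3, H, W) source-view image
    depth_unobs:  (1, 1, H, W) NeRF-rendered depth at the unobserved view
    K:            (3, 3) camera intrinsics (assumed shared across views)
    T_unobs2src:  (4, 4) rigid transform, unobserved camera -> source camera
    """
    _, _, H, W = src_img.shape
    device = src_img.device

    # Pixel grid of the unobserved view, in homogeneous coordinates.
    v, u = torch.meshgrid(torch.arange(H, device=device, dtype=torch.float32),
                          torch.arange(W, device=device, dtype=torch.float32),
                          indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(3, -1)

    # Unproject to 3D points in the unobserved camera frame using depth.
    cam_pts = torch.linalg.inv(K) @ pix * depth_unobs.reshape(1, -1)
    cam_pts_h = torch.cat([cam_pts, torch.ones(1, H * W, device=device)], dim=0)

    # Transform into the source camera frame and project with intrinsics.
    src_pts = (T_unobs2src @ cam_pts_h)[:3]
    proj = K @ src_pts
    uv = proj[:2] / proj[2:].clamp(min=1e-6)

    # Normalize to [-1, 1] for grid_sample and resample the source image.
    # Out-of-frame samples are zero-padded here; the paper's filtering of
    # erroneous warps would additionally build a validity/occlusion mask.
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1,
                        uv[1] / (H - 1) * 2 - 1], dim=-1).reshape(1, H, W, 2)
    return F.grid_sample(src_img, grid, align_corners=True)
```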
🔑 Key idea:
It leverages a depth map rendered at an unobserved viewpoint to warp the sparse input images to that viewpoint and imposes the warped images as pseudo ground truths. The work mainly targets the few-shot setting of the NeRF problem.
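The feature-level consistency between the rendering and the warped pseudo ground truth could be sketched as below, assuming a frozen VGG-16 backbone and a warp-validity mask that filters erroneously warped pixels; the chosen layers, the masking scheme, and the loss weighting are assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F
import torchvision

def feature_consistency_loss(rendered, warped, mask, layers=(3, 8, 15)):
    """Masked feature-level consistency between the image rendered at an
    unobserved view and the warped pseudo ground truth.

    rendered, warped: (1, 3, H, W) images in [0, 1]
    mask:             (1, 1, H, W) warp-validity mask (1 = valid pixel)
    layers:           VGG-16 feature indices to compare (an assumption)
    """
    device = rendered.device
    # Built here only to keep the sketch self-contained; in practice the
    # frozen extractor would be created once outside the training loop.
    vgg = torchvision.models.vgg16(
        weights="IMAGENET1K_V1").features.to(device).eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    # ImageNet normalization expected by VGG-16.
    mean = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1)
    x, y = (rendered - mean) / std, (warped - mean) / std

    loss, m = rendered.new_zeros(()), mask
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if isinstance(layer, torch.nn.MaxPool2d):
            # Keep the validity mask aligned with the feature resolution.
            m = F.max_pool2d(m, kernel_size=2)
        if i in layers:
            # L1 feature distance on valid pixels only, mask-averaged.
            loss = loss + (m * (x - y).abs()).sum() / m.sum().clamp(min=1.0)
        if i == max(layers):
            break
    return loss
```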
💪 Strength:
To deal with sparse input images of a scene, the authors propose a self-supervised training scheme in which warped pseudo ground truths supervise the renderings at unobserved viewpoints.
😵 Weakness:
It is unclear whether the few-shot learning setting is critically important in practice.
🤔 Confidence:
Low
✏️ Memo:
Currently, the official review scores are 8/8/5/5. The paper is likely to be accepted.
Neural Radiance Fields with Geometric Consistency for Few-Shot Novel View Synthesis
Anonymous et al., ICLR 2023
🔑 Key idea:
💪 Strength:
😵 Weakness:
🤔 Confidence:
✏️ Memo: