NagabhushanSN95 / SimpleNeRF

Official code release accompanying the paper "SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with Simpler Solutions"
https://nagabhushansn95.github.io/publications/2023/SimpleNeRF.html
MIT License

Issues with Depth Estimation on 360 Dataset with Sparse Views #3

Closed — UranusITS closed this issue 5 months ago

UranusITS commented 7 months ago

Thank you for your great work.

I am trying to train SimpleNeRF on the Mip360 dataset with 5 cameras. The dataset is in LLFF format, but the camera poses vary significantly between views. As a result, COLMAP fails to reconstruct a sparse point cloud (points3D), so no sparse depth estimates are available for supervision.

Could you suggest any modifications to the COLMAP process or an alternative approach to handle such variations in camera poses, especially for sparse view datasets in a 360 setting?
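To make the failure mode concrete: DS-NeRF-style sparse depth supervision is typically derived by transforming COLMAP's reconstructed points3D into each camera's frame and reading off their z-depths, so if COLMAP reconstructs no points, there is nothing to supervise with. The sketch below (not the repository's actual code, just an illustration using a world-to-camera pose `(R, t)`) shows that derivation:

```python
import numpy as np

def sparse_depths(points3d, R, t):
    """Z-depths of world-space points seen by a camera with
    world-to-camera pose (R, t), as used for DS-NeRF-style
    sparse depth supervision from COLMAP's points3D output."""
    cam = points3d @ R.T + t   # transform world points into camera coordinates
    z = cam[:, 2]              # z-component is the depth along the optical axis
    return z[z > 0]            # keep only points in front of the camera

# Toy example: identity pose, three points, one behind the camera.
pts = np.array([[0.0, 0.0, 2.0],
                [1.0, 0.0, 5.0],
                [0.0, 1.0, -1.0]])
R, t = np.eye(3), np.zeros(3)
print(sparse_depths(pts, R, t))  # → [2. 5.]
```

With an empty points3D array, this returns an empty depth set, which is exactly the situation described above.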

Your insights would be invaluable to my research.

Thank you for your time.

NagabhushanSN95 commented 7 months ago

Hi, thanks for your interest in our work. As far as I understand, COLMAP, NeRF, or any model without pre-training requires significant overlap between the views in order to learn. So I think 5 views is too few for 360° scenes. Can you try using 8 views?
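If you do stay with very few wide-baseline views, one common workaround is to rerun COLMAP with exhaustive matching and relaxed mapper thresholds, which sometimes recovers a sparse model. This is only a sketch assuming the standard COLMAP CLI; the paths and the threshold values below are illustrative, not values from the paper:

```shell
# Sketch: relaxed COLMAP reconstruction for few, wide-baseline views.
# Paths (db.db, images/, sparse/) and threshold values are illustrative.
colmap feature_extractor --database_path db.db --image_path images/
colmap exhaustive_matcher --database_path db.db
colmap mapper --database_path db.db --image_path images/ \
    --output_path sparse/ \
    --Mapper.init_min_tri_angle 4 \
    --Mapper.abs_pose_min_num_inliers 12 \
    --Mapper.min_num_matches 8
```

Even with relaxed thresholds, reconstruction can still fail when the views share too little overlap, in which case training without sparse depth supervision remains the fallback.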

UranusITS commented 7 months ago

Thanks for your suggestion of using more views for training on the Mip360 dataset to address the issue of sparse depth estimation. I'm conducting similar research and plan to use your research code for comparative experiments. Given that our experiments involve using 5 views and even fewer in some cases, do you think it would still be feasible to conduct comparative experiments between our setups with different view counts? Thank you for your input.

NagabhushanSN95 commented 7 months ago

In that case, you can try using our model without the COLMAP sparse depth supervision. You can also use any other sparse-input NeRF as the baseline instead of DS-NeRF, i.e., apply our augmentations on top of some other sparse-input NeRF.