NeRF-SLAM: Real-Time Dense Monocular SLAM with Neural Radiance Fields (https://arxiv.org/abs/2210.13641) + Sigma-Fusion: Probabilistic Volumetric Fusion for Dense Monocular SLAM (https://arxiv.org/abs/2210.01276)
I was able to try it out and experience it firsthand thanks to you. Thank you.
The Replica dataset appears to contain depth data, so why is the path to it passed when running in SLAM mode? From reading the paper, it seems that only monocular images are used as input, but I don't see any input data in that folder. Where can I find the input data?
Finally, can I run this directly with my own camera?
From what I can tell, the correct input for NeRF-SLAM is monocular images. The reason the user is asked to provide a depth image is to cover the case where the modified Instant-NGP algorithm (compared to the original repository) is run without the --slam parameter in the run command.
As for running directly with your own camera, I believe it is entirely possible as long as you provide all of the following parameters: w, h, fl_x, fl_y. You would need to write a script to support it.
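For reference, here is a minimal sketch of such a script, assuming the loader expects an Instant-NGP-style transforms.json describing a folder of monocular frames. The file name, schema, and the centered principal point (cx, cy) are assumptions on my part, so please check the repository's dataset loader for the exact format it expects:

```python
# Minimal sketch: write an Instant-NGP-style transforms.json for your own camera.
# Assumption: the dataset loader accepts this schema; verify against the repo.
import glob
import json
import os

def write_transforms(image_dir, out_path, w, h, fl_x, fl_y):
    """Write a minimal camera description for a folder of monocular frames."""
    frames = [
        {"file_path": os.path.relpath(p, os.path.dirname(out_path))}
        for p in sorted(glob.glob(os.path.join(image_dir, "*.png")))
    ]
    meta = {
        "w": w,          # image width in pixels
        "h": h,          # image height in pixels
        "fl_x": fl_x,    # focal length in pixels (x)
        "fl_y": fl_y,    # focal length in pixels (y)
        "cx": w / 2.0,   # assumption: principal point at the image center
        "cy": h / 2.0,
        "frames": frames,
    }
    with open(out_path, "w") as f:
        json.dump(meta, f, indent=2)

if __name__ == "__main__":
    # Hypothetical values for a 640x480 camera; replace with your own calibration.
    write_transforms("./my_camera/images", "./my_camera/transforms.json",
                     w=640, h=480, fl_x=525.0, fl_y=525.0)
```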
Thank you for providing the code.
thank you!!!