Closed · LHXhh closed this 1 year ago
NeRF does not learn anything that is not visible from the training cameras. With a super-large box, many regions are never optimized by NeRF, so they can end up with arbitrary color and density.
That being said, a simple fix is to set an as-tight-as-possible box. A more sophisticated way is to identify which regions are not visible from any camera and set the density there to zero.
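A minimal sketch of the second approach, under assumed conventions (OpenCV-style cameras looking down +z, world-to-camera matrices, a shared intrinsics matrix): mark each grid point visible if at least one training camera sees it, then zero the density at invisible points before extracting the mesh. The function name `visibility_mask` and all parameters are hypothetical, not part of this repo's API.

```python
import numpy as np

def visibility_mask(points, cam_poses, intrinsics, width, height):
    """Return a boolean mask marking grid points visible from at
    least one training camera.

    points:     (N, 3) world-space sample positions
    cam_poses:  list of 4x4 world-to-camera matrices
    intrinsics: 3x3 pinhole K matrix (assumed shared by all cameras)
    """
    visible = np.zeros(len(points), dtype=bool)
    homog = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    for w2c in cam_poses:
        cam_pts = (w2c @ homog.T).T[:, :3]      # world -> camera frame
        in_front = cam_pts[:, 2] > 0            # positive depth only
        uv = (intrinsics @ cam_pts.T).T
        uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-8, None)  # perspective divide
        in_frame = (
            (uv[:, 0] >= 0) & (uv[:, 0] < width)
            & (uv[:, 1] >= 0) & (uv[:, 1] < height)
        )
        visible |= in_front & in_frame          # union over all cameras
    return visible
```

Before running marching cubes, you would then do something like `sigma[~visibility_mask(grid_pts, poses, K, W, H)] = 0.0`, which removes the fog and the mirror ghost because both live only in never-observed space. (A stricter variant could also march along each ray and reject points occluded by geometry, but the frustum test above is usually enough to clean up the mesh.)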
Thank you for your answer. I need to think about it.
Thank you very much for your wonderful work. I have a question. When exporting the mesh, I need to give the scene boundary. When the given boundary is too large, e.g. (-2, 2), the scene I want is wrapped in a fog. Do you know where this fog comes from? Also, a fake scene appears, mirror-symmetric to the real one. Like this: ![2](https://user-images.githubusercontent.com/102360620/236602077-f814e922-3230-499f-9265-318cf654f372.png)