nikprt opened this issue 1 year ago
Hello, @nikprt. I am having the same questions as you.
I've been inspecting the COLMAP results in visualize_colmap.ipynb, and here is what I'm getting.
For the synthetic lego sequence (lego.mp4), the camera pose estimates are very accurate, as can be seen in the following image:
For my self-captured video sequences, the estimates are clearly erroneous:
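For reference, this sanity check can also be done by hand, independent of the notebook: parse COLMAP's sparse/0/images.txt and plot the camera centers, then check whether the trajectory resembles the path you actually walked. This is only a minimal sketch; the dataset path is a placeholder.

```python
# Minimal sketch: read world-to-camera poses from a COLMAP images.txt and plot
# the camera centers to eyeball whether the estimated trajectory is plausible.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

def quat_to_rotmat(qw, qx, qy, qz):
    """COLMAP quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    return np.array([
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw), 2 * (qx * qz + qy * qw)],
        [2 * (qx * qy + qz * qw), 1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
        [2 * (qx * qz - qy * qw), 2 * (qy * qz + qx * qw), 1 - 2 * (qx * qx + qy * qy)],
    ])

def load_camera_centers(images_txt):
    """Camera centers C = -R^T t from a COLMAP images.txt (world-to-camera poses)."""
    with open(images_txt) as f:
        lines = [l.strip() for l in f if l.strip() and not l.startswith("#")]
    centers = []
    for pose_line in lines[::2]:  # images.txt alternates pose line / 2D-points line
        vals = pose_line.split()
        qw, qx, qy, qz = map(float, vals[1:5])
        t = np.array([float(v) for v in vals[5:8]])
        R = quat_to_rotmat(qw, qx, qy, qz)
        centers.append(-R.T @ t)
    return np.array(centers)

centers = load_camera_centers("datasets/my_scene/sparse/0/images.txt")  # placeholder path
ax = plt.figure().add_subplot(projection="3d")
ax.scatter(centers[:, 0], centers[:, 1], centers[:, 2], s=5)
ax.set_title("Estimated camera centers")
plt.show()
```

If the plotted centers trace something like the real capture path, the poses are at least plausible; randomly scattered centers usually mean COLMAP's registration failed.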
In #124, @mli0603 said:
Neuralangelo and most NeRF methods will require dense camera coverage similar to taking photos of an object in a hemisphere. In your case, walking circularly around the scene will generally make the results better.
So maybe it can help in your case too. That's what I'm going to try in the next few days.
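To make the "hemisphere coverage" idea a bit more concrete, here is a small illustration (my own sketch, not from the repo) that samples viewpoints roughly evenly over the upper hemisphere around an object; the view count and radius are arbitrary assumptions.

```python
# Illustration of the capture pattern being described: viewpoints spread roughly
# evenly over the upper hemisphere around the object, all pointed at its center.
import numpy as np

def hemisphere_viewpoints(n_views=120, radius=2.0):
    """Return (n_views, 3) positions spread over the upper hemisphere of radius `radius`."""
    golden = np.pi * (3.0 - np.sqrt(5.0))  # golden-angle increment
    points = []
    for i in range(n_views):
        z = i / max(n_views - 1, 1)            # height in [0, 1] -> upper hemisphere
        r = np.sqrt(max(1.0 - z * z, 0.0))     # ring radius at that height
        theta = golden * i
        points.append(radius * np.array([r * np.cos(theta), r * np.sin(theta), z]))
    return np.stack(points)

positions = hemisphere_viewpoints()
# Each camera would then be oriented to look at the object's center (the origin here).
```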
So, in other words, don't expect horizontal pan videos to reconstruct as a flat mesh just yet. Personally, my fingers are crossed.
Exactly, horizontal panning videos don't seem like a good choice.
Results

I've tried a lot of things over the last few weeks, and I'll share them here:
We used an .obj file representing a factory with a piping system. The object was reconstructed from LiDAR scans, and we also generated a synthetic video in which the camera revolves around the factory.
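For anyone who wants to reproduce a synthetic orbit like this, below is a rough sketch of how such a video could be rendered with trimesh and pyrender. It is not the exact pipeline used here: the file name, resolution, orbit radius/height, and frame count are placeholder assumptions, and the PNG frames would still need to be assembled into an .mp4 (e.g. with ffmpeg).

```python
# Sketch: render an orbit around an .obj with trimesh + pyrender, writing one
# PNG per frame. Paths and orbit parameters are placeholders.
import os
import numpy as np
import trimesh
import pyrender
import imageio

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Camera-to-world pose for an OpenGL-style camera (looks along its local -Z)."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = right, true_up, -forward, eye
    return pose

tm = trimesh.load("factory.obj", force="mesh")  # placeholder .obj
scene = pyrender.Scene(ambient_light=0.3 * np.ones(3))
scene.add(pyrender.Mesh.from_trimesh(tm, smooth=False))
scene.add(pyrender.DirectionalLight(intensity=3.0), pose=np.eye(4))
cam_node = scene.add(pyrender.PerspectiveCamera(yfov=np.deg2rad(60.0)), pose=np.eye(4))

center = tm.centroid
radius = 1.5 * float(np.max(tm.extents))   # orbit radius relative to object size
height = 0.5 * radius                      # camera height above the object's center
renderer = pyrender.OffscreenRenderer(960, 540)

os.makedirs("frames", exist_ok=True)
n_frames = 120
for i in range(n_frames):
    angle = 2.0 * np.pi * i / n_frames
    eye = center + np.array([radius * np.cos(angle), radius * np.sin(angle), height])
    scene.set_pose(cam_node, look_at(eye, center))
    color, _ = renderer.render(scene)
    imageio.imwrite(os.path.join("frames", f"{i:04d}.png"), color)
renderer.delete()
```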
As the authors have commented, occlusions and homogeneous regions in the video remain a problem. This is evident in the reconstruction I obtained, where you can see these marshmallow-like clouds, but it's the first decent result I've gotten.
COLMAP sphere
Reconstructed object
Where can I find the requirements for a video that I want to give as input to Neuralangelo? Basically, I'm trying to run Neuralangelo in the Colab notebook with a custom video instead of lego.mp4.