alicevision / Meshroom

3D Reconstruction Software
http://alicevision.org

Settings for reconstructing camera path #846

Open BLAHBLAHneeb opened 4 years ago

BLAHBLAHneeb commented 4 years ago

Hey, so I'm pretty new to photogrammetry but have had some success with Meshroom re-creating stationary objects. I found a video online of a filmmaker using Meshroom to re-create a tracked camera along with the geometry of a scene, and I thought it'd be fun to experiment with.

On my Onewheel, using my iPhone 11, I recorded 1080p 30fps footage of my empty college campus (I know the footage isn't ideal for reconstruction, but this mainly served as a test). I rode along a long path, making sure to keep consistent features in frame for the software to track and mitigating as much motion blur as I could. I extracted screenshots from the footage (every five frames) and put them into Meshroom. For the first test I kept all the settings on default, set the FeatureExtraction describer preset to 'high', and only used the first 301 images. This reconstructed the scene (with every camera reconstructed) thoroughly enough that I could see the path and construct some good geometry. Perfect test!
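For context, the "every five frames" sampling above can be sketched in a few lines. This is a hypothetical illustration, not the tool the poster actually used; the `extract_frames` helper assumes OpenCV (`cv2`) is available.

```python
def frame_indices(total_frames: int, step: int = 5) -> list[int]:
    """Indices of the frames kept when sampling every `step`-th frame."""
    return list(range(0, total_frames, step))

def extract_frames(video_path: str, out_pattern: str, step: int = 5) -> int:
    """Save every `step`-th frame of a video; returns the number of images written.
    Requires OpenCV (an assumption -- any frame extractor would do)."""
    import cv2  # imported here so the pure-index helper above has no dependency
    cap = cv2.VideoCapture(video_path)
    saved = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(out_pattern % saved, frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# 30 fps footage sampled every 5 frames yields 6 images per second of video:
print(frame_indices(30, 5))  # [0, 5, 10, 15, 20, 25]
```

At this sampling rate, the ~6000-image dataset mentioned below corresponds to roughly 30000 video frames, i.e. about 16–17 minutes of footage.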

[Screenshot: Meshroom_2020-04-01_18-50-08]

With this test successful, I wanted to see if I could reconstruct the full path (with almost 6000 images). This proved much more difficult given my inexperience with the software, and I've run into a few issues on all my attempts. The overall problem is that the reconstruction either ignores images (despite them working in the smaller test) or won't reconstruct past roughly 400 images. I've tried using the "Lock Previously Reconstructed" setting in conjunction with Augment Reconstruction, but had no luck. I looked at the documentation but couldn't really find a solution. My project isn't being properly reconstructed at the StructureFromMotion node(s) of the pipeline.

My first question: is this possible inside Meshroom, reconstructing an entire path of almost 6000 images? If so, what are the ideal node setup and settings? Am I missing a magical StructureFromMotion or Augment Reconstruction setting that would fix my problem? If my current settings should be working, I can try again with higher-resolution frames or change the way I record the path.

Dataset: here are some pictures of my dataset; all were reconstructed in the successful test:

[Images: image_000013, image_000102, image_000179, image_000229, image_000301]

The rest of the images/environment look roughly the same, mainly extracting features from passing rocks, walls, or pavement cracks.


natowi commented 4 years ago

The problem with this kind of dataset is the linear capture path. Since you have only one camera and move in a single direction, only a handful of neighboring frames share features. The matching algorithm compares all images to each other to find matches, but due to the nature of your dataset only a small portion of the image pairs actually have matching features.
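The scaling problem behind this comment can be made concrete with a little arithmetic. Exhaustive matching considers every image pair, so the work grows quadratically, while on a one-way path only nearby frames overlap; restricting matching to a sliding window of successors (the idea behind sequential image matching) keeps the pair count linear. The window size of 20 below is an illustrative choice, not a recommendation from the thread.

```python
def exhaustive_pairs(n: int) -> int:
    """Number of image pairs when every image is compared to every other."""
    return n * (n - 1) // 2

def sequential_pairs(n: int, window: int) -> int:
    """Pairs when each image is only matched against its next `window` neighbors."""
    return sum(min(window, n - 1 - i) for i in range(n))

print(exhaustive_pairs(301))       # 45150 pairs for the successful 301-image test
print(exhaustive_pairs(6000))      # 17997000 pairs for the full ~6000-image path
print(sequential_pairs(6000, 20))  # 119790 pairs with a 20-image window
```

So the full dataset is about 400 times more matching work than the small test under exhaustive matching, even though most of those distant pairs can never share features on a linear path.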

I think there was a similar issue some time ago: https://github.com/alicevision/meshroom/issues/662

btw: https://github.com/alicevision/meshroom/wiki/Reconstruction-from-videos

BLAHBLAHneeb commented 4 years ago

Oops, my bad. New to Github.

BLAHBLAHneeb commented 4 years ago

> The problem with this kind of dataset is the linear capture path. Since you have only one camera and move in a single direction, only a handful of neighboring frames share features. The matching algorithm compares all images to each other to find matches, but due to the nature of your dataset only a small portion of the image pairs actually have matching features.

Gotcha. I figured that's what was happening. I'm currently doing one more pass to see if I can brute force it with ultra settings. I also knew about the keyframes from video node, I just didn't want to convert MOV to MP4 :P Thanks for the reply!

fabiencastan commented 4 years ago

@BLAHBLAHneeb Have you done an augmentation from your successful test with 300 images? If yes, what are the results? Does it decrease the quality of the first one?

What I would recommend for large datasets (>1000 images) is to do a first SfM with much stricter parameters, e.g. "Min Input Track Length" set to something high like 20 (depending, of course, on the density of your shooting). Then add another SfM node connected to the first one, with the default parameters. You can still be a bit stricter there: "Min Observation For Triangulation" = 4, "Min Angle For Triangulation" = 5, "Min Angle For Landmark" = 5.
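To get a feel for how strict "Min Input Track Length" = 20 is for this particular dataset, a quick back-of-the-envelope calculation helps. The numbers assumed here come from the thread itself (30 fps video, one image kept every 5 frames); they are not part of the recommendation.

```python
def track_seconds(min_track_length: int, fps: float = 30.0, frame_step: int = 5) -> float:
    """Seconds a feature must stay visible to appear in `min_track_length`
    sampled images, given the capture frame rate and sampling step."""
    images_per_second = fps / frame_step  # 30 / 5 = 6 images per second here
    return min_track_length / images_per_second

# A track length of 20 means a feature must stay in view for about 3.3 seconds
# of riding -- which is why this filters for the most stable, long-lived points.
print(round(track_seconds(20), 2))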

fabiencastan commented 4 years ago

ULTRA is meant for small datasets without enough connections between images, or for challenging surfaces, not for very large datasets! In your case the scene seems to be textured enough: HIGH could improve things, but not ULTRA. You have a better chance of improving your results by adjusting the SfM parameters as explained in my previous message.

BLAHBLAHneeb commented 4 years ago

> ULTRA is meant for small datasets without enough connections between images, or for challenging surfaces, not for very large datasets! In your case the scene seems to be textured enough: HIGH could improve things, but not ULTRA. You have a better chance of improving your results by adjusting the SfM parameters as explained in my previous message.

Gotcha. I'll try experimenting with the SfM settings you specified. The Ultra settings are definitely overkill and won't finish in any reasonable amount of time.

Also, I did not do an augmentation from the first test specifically. I've mainly done a bunch of different projects with different settings for each. I'll try adding 300 images at a time to that first test to see if it builds upon the first reconstruction. Thanks for the reply!