mapillary / OpenSfM

Open source Structure-from-Motion pipeline
https://www.opensfm.org/
BSD 2-Clause "Simplified" License

asking for an advice about merging two reconstructions #279

Open ywpkwon opened 6 years ago

ywpkwon commented 6 years ago

First of all, thanks @paulinus for sharing your great work! May I ask your advice on a use case?

Let's say I have a linear dataset of 500 images (taken from a driving car). I was able to reconstruct them incrementally (i.e., adding one view, triangulating it, bundle adjusting, and repeating), but it currently takes 4-5 hours. (Computation time does not matter much for now, but that is too long. From an earlier discussion (#130), I learned that there are local_bundle_radius and bundle_interval options.)
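For readers landing here: the two options mentioned go in the dataset's config.yaml. The option names come from the discussion in #130; the values below are illustrative assumptions, so check config.py in your OpenSfM version for the actual defaults.

```yaml
# data/config.yaml -- restrict bundle adjustment to a local neighborhood
# of the newly added shot instead of re-optimizing all cameras every time.
local_bundle_radius: 3    # optimize only cameras within this distance of the new shot (value assumed)
bundle_interval: 999999   # images between full global bundle adjustments (value illustrative)
```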

Do you think I could reconstruct subset 1 (the earlier 250 images) and subset 2 (the later 250 images) separately, and then merge them nicely? At their boundary, how would I compute the relative {R, T} (e.g., between the last camera of subset 1 and the first camera of subset 2)?

Alternatively, could I subdivide the set with overlap, such as [0-251] and [250-500], compute the relative {R, T} between the 250th and 251st cameras, and apply it to the whole second subset? (What if the scales are different?)
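The overlap idea above, including the scale question, can be sketched as estimating a 3D similarity transform (Umeyama alignment) from the camera centers of the shots both subsets share. This is an illustrative sketch, not OpenSfM's API; the function name and the choice of camera centers as correspondences are assumptions.

```python
import numpy as np

def align_similarity(A, B):
    """Estimate s, R, t such that B ~= s * R @ A + t (Umeyama alignment).

    A, B: (3, N) arrays of corresponding 3D points, e.g. the camera
    centers of the overlapping shots as reconstructed by subset 1 (B)
    and subset 2 (A).
    """
    mu_a = A.mean(axis=1, keepdims=True)
    mu_b = B.mean(axis=1, keepdims=True)
    Ac, Bc = A - mu_a, B - mu_b
    # SVD of the cross-covariance gives the rotation.
    U, S, Vt = np.linalg.svd(Bc @ Ac.T)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    # Scale from the singular values over the source variance.
    s = (S * np.array([1.0, 1.0, d])).sum() / (Ac ** 2).sum()
    t = mu_b - s * R @ mu_a
    return s, R, t
```

With s, R, t in hand, every camera center and 3D point X of the second reconstruction maps into the first one's frame as s * R @ X + t; the factor s absorbs the scale difference between the two reconstructions. At least three non-collinear correspondences are needed, so an overlap of only two cameras is not enough by itself.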

Would there be a better way to do this with your library? Could you provide some pseudocode/logic/hints for that?

paulinus commented 6 years ago

hi @ywpkwon,

For 500 images the simplest approach is to run a single reconstruction. Running local bundle adjustments helps keep the runtime reasonable. For some reason, though, it is currently much slower than it should be: it can take up to 10 s per iteration even when only a few cameras are being optimized. There is a numerical issue that I don't understand yet that makes local bundle adjustment converge more slowly than the global one. Even so, it is faster than running the global one as soon as there are 100+ images.

The other option, as you say, is to split the dataset and run smaller reconstructions: http://opensfm.readthedocs.io/en/latest/large.html. For 500 images it is probably overkill, though.
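For reference, the split-and-merge workflow that page describes looks roughly like the command sketch below. Command names are taken from the linked documentation and may differ across OpenSfM versions; data/ is a placeholder dataset path.

```sh
bin/opensfm create_submodels data/   # partition the dataset into overlapping submodels
# ... run the usual reconstruction pipeline inside each data/submodels/submodel_* ...
bin/opensfm align_submodels data/    # register the partial reconstructions into one frame
```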