AIBluefisher / DAGSfM

Distributed and Graph-based Structure from Motion. This project includes the official implementation of our Pattern Recognition 2020 paper: Graph-Based Parallel Large Scale Structure from Motion.
https://aibluefisher.github.io/GraphSfM/
BSD 3-Clause "New" or "Revised" License

Sequential mode error #57

Open zhangmozhe opened 2 years ago

zhangmozhe commented 2 years ago

Hi, thanks for this great work. When I try the sequential mode, I come across the following issue (screenshot attached):

How can I solve it? Thanks!

zhangmozhe commented 2 years ago

I just solved the problem by rerunning the feature extraction with the code of the current repo. Another question: is it possible to set `Mapper.ba_local_max_num_iterations` with the distributed_mapper? Do I need to recompile the code or just provide the argument?

Looking forward to your reply. Thanks!

AIBluefisher commented 2 years ago

Sure, you can set this option. Note that `IncrementalMapperOptions& mapper_options` is passed as a parameter to the constructor of `DistributedMapperController`. However, after checking the code at lines 828-891, a minor modification is needed for this to work: add one more line of code after line 882: `options.AddMapperOptions();`. Then recompile the code and it should work.

zhangmozhe commented 2 years ago

Thanks for your help!

zhangmozhe commented 2 years ago

@AIBluefisher After running the code, I found another issue. Since I use a fixed camera (a reference camera) along with 4 moving cameras during data capture, 1/5 of the images should share the same extrinsics. Is it possible to set this prior when running COLMAP? Is there any workaround?

Thanks very much for your help!

AIBluefisher commented 2 years ago

Seems very interesting. I can suggest two methods: (1) since one camera is fixed all the time, we can treat the observations (u_{k0}, u_{k1}, u_{k2}, ...) w.r.t. the different frames of this camera as all being observations ({u_{k0} + u_{k1} + u_{k2} + ...}) w.r.t. a single frame; (2) alternatively, we can add extrinsic constraints between all frames captured by this fixed camera during BA, e.g. minimizing `\min \sum_k || P_k P_{k+1}^{-1} - I ||`. I think the first one should be enough, though the implementation might be non-trivial; the latter should also work but introduces more computational burden.
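To make option (2) concrete, here is a minimal numpy sketch of that consistency penalty, assuming 4x4 homogeneous camera poses `P_k` for the frames captured by the fixed camera (the helper names are mine, not from the repo). For a truly static camera every relative pose `P_k P_{k+1}^{-1}` is the identity, so the penalty is zero; during BA the weighted sum of these residuals pulls the pose estimates together.

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous camera pose from a 3x3 rotation and a translation."""
    P = np.eye(4)
    P[:3, :3] = R
    P[:3, 3] = t
    return P

def extrinsic_consistency_cost(poses):
    """Sum of Frobenius-norm deviations of P_k @ inv(P_{k+1}) from identity."""
    cost = 0.0
    for P_k, P_next in zip(poses, poses[1:]):
        rel = P_k @ np.linalg.inv(P_next)
        cost += np.linalg.norm(rel - np.eye(4))
    return cost

# Identical poses give zero penalty; a perturbed pose gives a positive one.
P = pose(np.eye(3), np.array([1.0, 0.0, 0.0]))
Q = pose(np.eye(3), np.array([1.0, 0.1, 0.0]))
print(extrinsic_consistency_cost([P, P, P]))      # 0.0
print(extrinsic_consistency_cost([P, Q, P]) > 0)  # True
```

In a real BA this term would be added as an extra residual block (e.g. a Ceres cost functor) alongside the reprojection errors, with a weight balancing it against them.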

zhangmozhe commented 2 years ago

Thanks for the suggestion. How would you implement option 1? One way I can think of is to use exhaustive matching between a subset of non-reference images and the chosen reference frame, and then add the rest of the non-reference images via sequential matching. This works because I found that COLMAP supports resuming the matching (screenshot attached).

Another way I can think of is to use custom matching to specify the image pairs for the reference frame, like this:

```
ref.png frame1_1.png
ref.png frame1_2.png
ref.png frame1_3.png
ref.png frame1_4.png
ref.png frame2_1.png
ref.png frame2_2.png
ref.png frame2_3.png
ref.png frame2_4.png
...
```

and then match the other images.

Do you think these two approaches are doable? Is there any suggestion for the implementation?
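For what it's worth, a pair list like the one above could be generated with a short script. This is a sketch under the naming scheme from this thread (`ref.png`, `frame{i}_{j}.png`); the function name and parameters are mine. The resulting file is in the "one pair per line" format COLMAP's custom matching accepts.

```python
def write_reference_pairs(path, num_frames, cams_per_frame=4, ref="ref.png"):
    """Write one 'ref image' pair per line for every moving-camera frame."""
    with open(path, "w") as f:
        for i in range(1, num_frames + 1):
            for j in range(1, cams_per_frame + 1):
                f.write(f"{ref} frame{i}_{j}.png\n")

# Two frames x 4 moving cameras -> 8 pairs against the reference image.
write_reference_pairs("ref_pairs.txt", num_frames=2)
print(open("ref_pairs.txt").readline().strip())  # ref.png frame1_1.png
```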

AIBluefisher commented 2 years ago

Sure, I think both ways can work. Since the reference frames are easy to identify by their camera id, both methods should be easy to implement. After that, for the reference frames, we can collect all the observations and matches together, but we need to reindex the keypoints.