zhangmozhe opened 2 years ago
I just solved the problem by rerunning the feature extraction with the same code from the current repo. Another question: is it possible to set `Mapper.ba_local_max_num_iterations` with the distributed_mapper? Do I need to recompile the code, or just provide the argument?
Looking forward to your reply. Thanks!
Sure, you can set this option. Note that the constructor of DistributedMapperController takes an `IncrementalMapperOptions& mapper_options` parameter. However, after checking the code at lines 828-891, a minor modification is needed for this to work: you have to add one more line of code after line 882: `options.AddMapperOptions();`. After this, recompile the code and it should work.
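Once recompiled, the option could then presumably be passed like any other COLMAP-style flag. A hypothetical invocation (the binary name and all paths are placeholders, not confirmed from the repo):

```shell
# Hypothetical command line; adjust the binary name and paths to your build.
./distributed_mapper \
    --database_path /path/to/database.db \
    --image_path /path/to/images \
    --output_path /path/to/sparse \
    --Mapper.ba_local_max_num_iterations 40
```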
Thanks for your help!
@AIBluefisher After running the code, I found another issue. Since I use a fixed camera (a reference camera) along with 4 moving cameras during data capture, 1/5 of the images should share the same extrinsics. Is it possible to set this prior during the COLMAP run? Is there any workaround?
Thanks very much for your help!
Seems very interesting. I may suggest two methods: (1) since one camera is fixed all the time, we can treat the observations (u_k0, u_k1, u_k2, ...) of this camera across different frames as observations ({u_k0 + u_k1 + u_k2 + ...}) of a single frame; (2) alternatively, we can add extrinsic constraints during BA to all frames captured by this fixed camera, e.g. \min \sum_k \| P_k P_{k+1}^{-1} - I \|. I think the first one should be enough, but the implementation might not be trivial; the latter should also work but introduces more computational burden.
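As a sketch of option (2), the extrinsic constraint could penalize consecutive poses of the fixed camera for differing from each other. A minimal numpy version, assuming 4x4 world-to-camera extrinsic matrices (this is an illustration, not the repo's actual BA code, which would add such residuals through its solver):

```python
import numpy as np

def pose_consistency_residual(P_k, P_k1):
    """Residual penalizing P_k * P_{k+1}^{-1} for deviating from identity.

    P_k, P_k1: 4x4 extrinsic matrices of the fixed camera at consecutive
    frames. Returns the Frobenius norm of the deviation from identity.
    """
    rel = P_k @ np.linalg.inv(P_k1)
    return np.linalg.norm(rel - np.eye(4))

def total_constraint_cost(poses):
    """Sum the residual over all consecutive frame pairs (the sum over k)."""
    return sum(pose_consistency_residual(poses[k], poses[k + 1])
               for k in range(len(poses) - 1))
```

When all poses of the fixed camera are identical, the cost is zero, so minimizing it alongside the reprojection error pulls those extrinsics together.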
Thanks for the suggestion. How should I implement option 1? One way I can think of is to use exhaustive matching between a subset of non-reference images and the chosen reference frame, and then add the remaining non-reference images with sequential matching. This is because I found that COLMAP supports resuming the matching.
Another way I can think of is to use custom matching to specify the image pairs involving the reference frame, like this:

ref.png frame1_1.png
ref.png frame1_2.png
ref.png frame1_3.png
ref.png frame1_4.png
ref.png frame2_1.png
ref.png frame2_2.png
ref.png frame2_3.png
ref.png frame2_4.png
...

And then match the other images.
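A pair list like the one above could be generated with a short script. The file names and the four-moving-cameras-per-frame layout are assumptions taken from this thread, not from any repo tooling:

```python
def make_reference_pairs(num_frames, cams_per_frame=4, ref_name="ref.png"):
    """Build (reference, image) match pairs in COLMAP's custom-match
    text format: one 'imageA imageB' pair per line."""
    lines = []
    for frame in range(1, num_frames + 1):
        for cam in range(1, cams_per_frame + 1):
            lines.append(f"{ref_name} frame{frame}_{cam}.png")
    return "\n".join(lines)

# Example: two frames, four moving cameras each (8 pairs total).
print(make_reference_pairs(2))
```

The resulting text can be written to a file and passed to COLMAP's custom matcher as the match list.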
Do you think these two approaches are doable? Do you have any suggestions for the implementation?
Sure, I think both ways are doable. Since the reference frames are easy to identify by camera id, both methods should be easy to implement. After that, for the reference frames, we can collect all the observations and matches together, but we need to reindex the keypoints.
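The "collect observations and reindex keypoints" step could look roughly like this. A minimal sketch using plain lists, not the repo's actual data structures; the key idea is that keypoint indices of each merged image must be shifted by the number of keypoints already accumulated:

```python
def merge_reference_keypoints(per_image_keypoints, per_image_matches):
    """Merge keypoints of several reference images into one virtual image.

    per_image_keypoints: list of keypoint lists, one per reference image.
    per_image_matches: for each image, a list of (kp_idx, other_obs) pairs,
    where kp_idx indexes into that image's own keypoint list.
    Returns the merged keypoint list and the matches reindexed into it.
    """
    merged_kps = []
    merged_matches = []
    for kps, matches in zip(per_image_keypoints, per_image_matches):
        offset = len(merged_kps)  # shift by what has been merged so far
        merged_kps.extend(kps)
        merged_matches.extend((kp_idx + offset, obs) for kp_idx, obs in matches)
    return merged_kps, merged_matches
```

After this, duplicate keypoints (the same 3D point seen in several reference frames) could additionally be deduplicated, but the offset-based reindexing above is the essential bookkeeping.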
Hi, thanks for this great work. When I try the sequential mode, I come across the following issue: ![image](https://user-images.githubusercontent.com/10203551/155885568-54623f59-d94f-4ce5-8a2c-fa2f8f3e0066.png)
How can I solve it? Thanks!