cvg / glue-factory

Training library for local feature detection and matching
Apache License 2.0

Questions about training data #33

Closed ACuOoOoO closed 11 months ago

ACuOoOoO commented 1 year ago

I have been looking forward to the training code of LightGlue and SuperGlue for a long time, and I can't wait to reproduce this work. However, the training-data requirement (~900 GB) is too large for me. So I want to ask:

Firstly, LightGlue claims there is no major difference between homography pretraining on MegaDepth and on Oxford-Paris 1M. How large is the difference exactly? Can I just perform homography pretraining on MegaDepth?

Secondly, is there a major difference between the linked preprocessed MegaDepth and the one provided by previous works, e.g., DISK's? I already have the official raw data and the preprocessed MegaDepth data provided by DISK, and I don't want to download another copy of MegaDepth.

Phil26AT commented 1 year ago

Hi @ACuOoOoO

Yes, this is definitely doable, and you should expect very similar results. You can just adapt the paths in the homographies dataset to MegaDepth and create a .txt file with the names of all the images you want to use.
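
In case it helps, here is a minimal sketch of how one might build that image list by scanning a local MegaDepth copy. The directory layout (`Undistorted_SfM/<scene>/images/`) and the output filename are assumptions about a typical setup, not glue-factory's API; adjust them to your data.

```python
# Sketch: write the names of all local MegaDepth images to a .txt list.
# Assumes images live under <root>/Undistorted_SfM/<scene>/images/;
# adjust the glob pattern to match your local layout.
from pathlib import Path

root = Path("data/megadepth")       # local dataset root (assumed)
out = root / "train_images.txt"     # hypothetical list filename

count = 0
with out.open("w") as f:
    for image in sorted((root / "Undistorted_SfM").glob("*/images/*.jpg")):
        # one path per line, relative to the dataset root
        f.write(str(image.relative_to(root)) + "\n")
        count += 1
print(f"wrote {count} image names to {out}")
```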

No, it is almost identical; we just changed the paths of the depth images and the scene_info files (you can download them manually here). So just download the scene_info files and change the depth_subpath.
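
For reference, a quick sanity check one could run after downloading the scene_info files and pointing depth_subpath at the local depth maps. The directory names (`scene_info`, `depth_undistorted`) are assumptions about a typical layout; MegaDepth depth maps are stored as .h5 files.

```python
# Sketch: verify that the scene_info download and local depth maps
# are in place. Directory names are assumptions; match them to the
# value you set for depth_subpath.
from pathlib import Path

import numpy as np

root = Path("data/megadepth")
scene_info = sorted((root / "scene_info").glob("*.npz"))
depth_maps = list((root / "depth_undistorted").rglob("*.h5"))
print(f"{len(scene_info)} scene_info files, {len(depth_maps)} depth maps")

# Flag any scene_info file that is present but unreadable.
for f in scene_info:
    try:
        np.load(f, allow_pickle=True)
    except Exception as e:
        print(f"could not read {f.name}: {e}")
```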