Livesoso opened this issue 1 year ago
Hi @Livesoso
In our experiments, we observed that LightGlue is generally better than SuperGlue on all training scenes except for Dioscuri, where SuperGlue is slightly better (but by 2-4% max, not 20%). Our reasoning is that the relative positional encoding makes LightGlue learn the data distribution more effectively. And in the training dataset which we used (MegaDepth), in-plane rotations are non-existent, while they dominate on Dioscuri. However, these rotations can easily be fixed, e.g. from the EXIF data in the image, or with a deep network.
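To illustrate the EXIF route mentioned above, here is a minimal sketch that maps the EXIF Orientation tag to the rotation needed to upright an image before matching. The tag values follow the EXIF specification; the helper name is hypothetical, and only the four non-mirrored orientations are handled.

```python
# Undo in-plane rotation using the EXIF Orientation tag (values 1-8 per the
# EXIF spec). The returned value is the clockwise angle, in degrees, needed
# to display the image upright. Mirrored orientations (2, 4, 5, 7) and
# unknown values fall back to 0 here for simplicity.
EXIF_ORIENTATION_TO_CW_DEG = {1: 0, 3: 180, 6: 90, 8: 270}

def upright_rotation(orientation: int) -> int:
    """Clockwise degrees to rotate the image so it is upright."""
    return EXIF_ORIENTATION_TO_CW_DEG.get(orientation, 0)
```

Rotating the image (or the keypoints) by this angle before running SP+LG restores the roughly upright distribution that LightGlue saw during MegaDepth training.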
Here are some results on Dioscuri with a very simple baseline (just hloc, NetVLAD top-50, and SuperPoint with 4K keypoints):

SP+SG: 0.525 mAA
SP+LG: 0.499 mAA
SP+SG-rot: 0.670 mAA
SP+LG-rot: 0.686 mAA
There are also many other cool solutions to the in-plane rotation problem on Kaggle, so be sure to check them out!
Thank you very much! I will try more.
Hello, I want to confirm some parameters for SuperPoint and LightGlue. When using SP+LG, are the best parameters as follows?

SuperPoint: "nms_radius": 8, "max_num_keypoints": 4096, "detection_threshold": 0.000
LightGlue: "depth_confidence": 0.95 (early stopping, disable with -1), "width_confidence": 0.99 (point pruning, disable with -1), "filter_threshold": 0.1
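For reference, the settings above could be collected into configuration dicts like the sketch below. The key names match the SuperPoint/LightGlue options being discussed, but the surrounding structure is illustrative, not hloc's exact schema.

```python
# Illustrative config dicts for the parameters discussed above. The key
# names mirror the SuperPoint and LightGlue options; the dict layout itself
# is just a sketch, not a verbatim hloc/lightglue configuration.
superpoint_conf = {
    "nms_radius": 8,              # non-maximum suppression radius (pixels)
    "max_num_keypoints": 4096,    # keep the top-K detections
    "detection_threshold": 0.000, # accept all detections, rely on top-K cap
}
lightglue_conf = {
    "depth_confidence": 0.95,  # early stopping, disable with -1
    "width_confidence": 0.99,  # point pruning, disable with -1
    "filter_threshold": 0.1,   # minimum confidence for a match to be kept
}
```

Note that the early-stopping and pruning options trade a small amount of accuracy for speed; disabling both (-1) gives the most accurate, slowest setting.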
Thank you very much for your work! I have some questions about IMC2023. I want to tune my pipeline with your LightGlue. My feature matching was SuperPoint + SuperGlue, which scored 0.65 on the heritage_dioscuri scene. But when I switched to SuperPoint + LightGlue, the score dropped to 0.48. I am very confused by this large decline, which is strange because on the other scenes the scores improved. Both runs used the same settings: images resized to 1600 and 2048 SuperPoint keypoints.
Thank you