zju3dv / LoFTR

Code for "LoFTR: Detector-Free Local Feature Matching with Transformers", CVPR 2021, T-PAMI 2022
https://zju3dv.github.io/loftr/
Apache License 2.0

Provide additional details on HPatches evaluation #136

Parskatt opened 2 years ago

Parskatt commented 2 years ago

Hi, I'm interested in how you calculated the AUC for homographies on HPatches exactly.

Related: #105

zehongs commented 2 years ago

Hi, I've checked the related issue. We use the implementation of AUC from SuperGlue. Here's the code in our repo: https://github.com/zju3dv/LoFTR/blob/94e98b695be18acb43d5d3250f52226a8e36f839/src/utils/metrics.py#L151-L154

And as stated in the paper, the error is the L2 distance between the image corners warped by the ground-truth and the estimated homographies: `error = l2norm(homography_projection(corners, H_gt) - homography_projection(corners, H_estimate))`. To compute `H_estimate`, we use `cv2.findHomography(src_pts, tgt_pts, cv2.RANSAC, 3.)`.
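For reference, here is a minimal sketch of this corner-error metric and a SuperGlue-style AUC computation. The function names (`homography_corner_error`, `error_auc`) and signatures are illustrative, not the repo's exact code:

```python
import cv2
import numpy as np

def homography_corner_error(H_est, H_gt, w, h):
    """Mean L2 distance between the four image corners warped by H_est vs. H_gt."""
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float64)
    warped_est = cv2.perspectiveTransform(corners[None], H_est)[0]
    warped_gt = cv2.perspectiveTransform(corners[None], H_gt)[0]
    return np.linalg.norm(warped_est - warped_gt, axis=1).mean()

# H_est from matched keypoints, as stated above:
# H_est, inliers = cv2.findHomography(src_pts, tgt_pts, cv2.RANSAC, 3.)

def error_auc(errors, thresholds=(5, 10, 20)):
    """AUC of the cumulative error curve per threshold (trapezoidal rule),
    following SuperGlue's pose_auc."""
    errors = np.sort(np.asarray(errors, dtype=np.float64))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    errors = np.concatenate([[0.0], errors])
    recall = np.concatenate([[0.0], recall])
    aucs = []
    for t in thresholds:
        # integrate recall over [0, t], then normalize by t
        idx = np.searchsorted(errors, t)
        r = np.concatenate([recall[:idx], [recall[idx - 1]]])
        e = np.concatenate([errors[:idx], [t]])
        aucs.append(np.trapz(r, x=e) / t)
    return aucs
```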

Parskatt commented 2 years ago

Thanks for the response.

From looking at your code, the thresholds are always set to [5, 10, 20], while in the paper you report [3, 5, 10]. Are [5, 10, 20] the thresholds you actually used?

If you have the time, could you help reproduce the results in this repo?

https://github.com/GrumpyZhou/image-matching-toolbox/issues/20

Parskatt commented 2 years ago

These are the results obtained there (albeit with a RANSAC threshold of 2 px):

OpenCV with RANSAC threshold = 2, AUC at error thresholds [3, 5, 10] px:

| Method | AUC@3px | AUC@5px | AUC@10px |
| --- | --- | --- | --- |
| SuperPoint | 0.37 | 0.51 | 0.68 |
| SuperPoint+SuperGlue | 0.39 | 0.53 | 0.71 |
| CAPS (w. SuperPoint) | 0.33 | 0.49 | 0.67 |
| LoFTR (all matches) | 0.48 | 0.60 | 0.74 |

Parskatt commented 2 years ago

The results seem to align with your own, but with all methods scoring lower, so there seems to be some discrepancy.

Parskatt commented 2 years ago

From your paper: *(screenshot from the paper)*

Should this be interpreted to mean that you also scale the homography?

zehongs commented 2 years ago

Yes, this follows Sec. 7.3 of the SuperPoint paper. By scaling the images and then comparing the corner AUCs, the evaluation results stay consistent across all image pairs; otherwise, the evaluation can easily deteriorate with large image inputs. You can also replace the pixel-level threshold with a normalized pixel-level threshold.
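For concreteness, here is a minimal sketch of mapping the ground-truth homography to the resized images, assuming per-image `(sx, sy)` resize factors; the names are illustrative, not from the repo:

```python
import numpy as np

def scale_homography(H_gt, scale1, scale2):
    """Map H_gt (defined on the original resolutions) to the resized images.

    scale1/scale2 are (sx, sy) resize factors for image 1 and image 2.
    A point p' in resized image 1 corresponds to inv(S1) @ p' in the
    original, so the warp in resized coordinates is S2 @ H_gt @ inv(S1).
    """
    S1 = np.diag([scale1[0], scale1[1], 1.0])
    S2 = np.diag([scale2[0], scale2[1], 1.0])
    return S2 @ H_gt @ np.linalg.inv(S1)
```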

TruongKhang commented 2 years ago

Hiiiii @zehongs, I'm also trying to reproduce the results on HPatches. In your last comment above, did you mean that you scaled the ground-truth homography to evaluate on the scaled images? Or did you do it like this: scale the images and estimate the matches, then re-scale the matches back to the original resolution and perform the evaluation with the original ground-truth homography?
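To make the two alternatives concrete, here is a sketch of both protocols, reusing the illustrative `scale_homography` and `homography_corner_error` helpers from the sketches above; all names and scale factors are assumptions:

```python
import cv2
import numpy as np

# (a) evaluate in the resized frame against a rescaled ground truth:
def eval_in_resized_frame(src_pts, tgt_pts, H_gt, scale1, scale2, w_r, h_r):
    H_gt_r = scale_homography(H_gt, scale1, scale2)
    H_est, _ = cv2.findHomography(src_pts, tgt_pts, cv2.RANSAC, 3.)
    return homography_corner_error(H_est, H_gt_r, w_r, h_r)

# (b) map matches back to the original resolution, then use the original H_gt:
def eval_in_original_frame(src_pts, tgt_pts, H_gt, scale1, scale2, w, h):
    src = src_pts / np.array(scale1)  # undo the resize of image 1
    tgt = tgt_pts / np.array(scale2)  # undo the resize of image 2
    H_est, _ = cv2.findHomography(src, tgt, cv2.RANSAC, 3.)
    return homography_corner_error(H_est, H_gt, w, h)
```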

georg-bn commented 2 years ago

@zehongs We have been able to reproduce the numbers in https://github.com/GrumpyZhou/image-matching-toolbox/issues/20. If you could comment on/clarify the points below, it would be very helpful:

Thank you!