Closed · opened by mfaisal59 · closed 5 years ago
mfaisal59:
Hi,
The matching score of SIFT in Table 6 of your paper differs from the one reported by the SuperPoint authors in Table 4 of their paper. Is there anything I am missing?
LFNet:
SuperPoint:

Reply:
I see; the relevant numbers are .288 vs .313. That's not part of the standard HPatches benchmark (which is patch-based), so we ran it independently with different settings, even though we both used OpenCV. We probably used different image sizes and keypoint counts, and we may even have sampled the image pairs differently (I believe we matched the first image against the rest in every sequence; I'm not sure I'm reading the SuperPoint paper correctly, but they may have used all possible pairs).
We should have been more explicit about this, but we ran this experiment last-minute for the appendix. The point was simply to show how learned methods compare with hand-crafted ones with respect to the inlier threshold.
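For concreteness, here is a minimal plain-NumPy sketch (not the actual evaluation code of either paper) of the two things discussed in this thread: computing a matching score under an inlier threshold, and the two pair-sampling conventions (first-vs-rest within a sequence vs. all possible pairs). The function names, the `eps` default, and the normalisation by the smaller keypoint count are assumptions for illustration; conventions vary between papers, which is exactly why such numbers are hard to compare.

```python
from itertools import combinations
import numpy as np

def project(H, pts):
    """Apply a 3x3 homography H to an Nx2 array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]

def matching_score(kps_a, kps_b, matches, H_ab, eps=3.0):
    """Fraction of keypoints whose match reprojects within `eps` pixels
    under the ground-truth homography H_ab.

    kps_a, kps_b : Nx2 / Mx2 keypoint coordinates in each image
    matches      : (i, j) index pairs from a descriptor matcher
    eps          : inlier threshold in pixels (the knob the experiment varies)

    Normalising by min(#keypoints) is one common convention; it is an
    assumption here, and choosing a different denominator (or a different
    keypoint budget per image) shifts the absolute numbers.
    """
    if len(matches) == 0:
        return 0.0
    proj_a = project(H_ab, np.asarray(kps_a, dtype=float))
    kps_b = np.asarray(kps_b, dtype=float)
    correct = sum(
        1 for i, j in matches
        if np.linalg.norm(proj_a[i] - kps_b[j]) <= eps
    )
    return correct / min(len(kps_a), len(kps_b))

def first_vs_rest(seq):
    """Pair the first image of a sequence against every other image."""
    return [(seq[0], s) for s in seq[1:]]

def all_pairs(seq):
    """Every unordered image pair in a sequence."""
    return list(combinations(seq, 2))
```

A 6-image HPatches sequence yields 5 pairs under `first_vs_rest` but 15 under `all_pairs`, so the two conventions average over different sets of viewpoint/illumination gaps even before image size or keypoint count enters the picture.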