ahojnnes / local-feature-evaluation

Comparative Evaluation of Hand-Crafted and Learned Local Features

RANSAC parameters for geometric verification #1

Closed lzx551402 closed 7 years ago

lzx551402 commented 7 years ago

First of all, thank you for your excellent work. Could you provide the RANSAC configuration you used for the evaluation? I couldn't find it in either the original paper or this repo; or are you using the default configuration? I'm asking because the default configuration fails to discard many visually unreliable inlier image pairs, and, strangely, min_inlier_ratio seems to have no effect at all: pairs that should be discarded by this threshold still appear in the inlier pair list. Is this a COLMAP issue, or am I doing something wrong?

Thank you very much.

ahojnnes commented 7 years ago

Hi, you should use the default parameters in COLMAP; no changes are needed. I will highlight this more clearly in the benchmark instructions. The min_inlier_ratio parameter only determines the maximum number of RANSAC iterations; it does not decide whether an image pair is considered overlapping. That decision is made by the min_num_inliers parameter.
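For reference, a minimal sketch of invoking the matcher with the defaults from Python. The database path is hypothetical; the two SiftMatching flags are spelled out only to show the relevant defaults (to my knowledge, 15 and 0.25 in COLMAP), so omitting them gives the same behavior:

```python
import subprocess

# Run COLMAP's exhaustive matcher with default RANSAC/verification
# parameters. The explicit flags below just restate the defaults;
# the database path is a hypothetical placeholder.
subprocess.run([
    "colmap", "exhaustive_matcher",
    "--database_path", "dataset/database.db",
    # min_num_inliers decides whether a pair counts as overlapping;
    # min_inlier_ratio only bounds the number of RANSAC iterations.
    "--SiftMatching.min_num_inliers", "15",
    "--SiftMatching.min_inlier_ratio", "0.25",
], check=True)
```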

Note that outlier matches and outlier image pairs will be filtered out during the reconstruction stage.

Let me know whether this answers your question. I will close this issue as resolved, but feel free to reopen.

lzx551402 commented 7 years ago

Thank you very much, that clears it up for me. May I ask one more question about counting inlier pairs/matches? If I'm not mistaken, they are counted after sparse reconstruction, and only registered images are taken into account. If so, additional programming is needed to parse the COLMAP output rather than using the export_inlier_matches.py script from the COLMAP repo. Am I right?

ahojnnes commented 7 years ago

The inlier pairs/matches are counted before reconstruction, so this metric still contains outliers; the numbers of sparse points, registered images, etc. are more precise measures, since they are computed after the reconstruction. I didn't have time to prepare all the code before the conference, but over the next few weeks I will prepare scripts that automatically extract all evaluation metrics.
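For illustration, a minimal sketch of counting inlier pairs/matches directly from the COLMAP database, before any reconstruction, in the spirit of export_inlier_matches.py. The database path and the MIN_NUM_INLIERS threshold are assumptions; the two_view_geometries table is where COLMAP stores the geometrically verified matches:

```python
import sqlite3

# Assumed threshold; chosen to match COLMAP's default min_num_inliers.
MIN_NUM_INLIERS = 15

# Hypothetical database path.
connection = sqlite3.connect("dataset/database.db")
cursor = connection.cursor()

# The two_view_geometries table holds the verified (inlier) matches
# per image pair; `rows` is the inlier match count for that pair.
num_inlier_pairs = 0
num_inlier_matches = 0
for (rows,) in cursor.execute(
        "SELECT rows FROM two_view_geometries WHERE rows >= ?",
        (MIN_NUM_INLIERS,)):
    num_inlier_pairs += 1
    num_inlier_matches += rows

connection.close()
print("num_inlier_pairs:", num_inlier_pairs)
print("num_inlier_matches:", num_inlier_matches)
```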