rpautrat / SuperPoint

Efficient neural feature detector and descriptor
MIT License

MagicPoint evaluation difference #244

Closed · Jinjing98 closed 2 years ago

Jinjing98 commented 2 years ago

Hello, rpautrat

I have some confusion regarding the evaluation result of MagicPoint. It would be great if you could help me out!

So I tried to continue training the MagicPoint model on the synthetic dataset, starting from your pre-trained model "mp_synth-v11", with the command below:

python experiment.py train configs/magic-point_shapes.yaml my_mp --pretrained_model mp_synth-v11

I expected to get decent precision in the early training stage (as in the paper, ~0.97); however, the log (attached below) shows that I only get ~0.5 precision. What could be the reason behind this?

[01/23/2022 16:20:24 INFO] Iter 0: loss 0.1091, precision 0.5430, recall 0.6353
[01/23/2022 16:20:28 INFO] Iter 10: loss 0.0985, precision 0.4483, recall 0.5762
[01/23/2022 16:20:32 INFO] Iter 20: loss 0.1324, precision 0.4742, recall 0.6047
[01/23/2022 16:20:35 INFO] Iter 30: loss 0.0843, precision 0.5142, recall 0.6054

I am also confused about the parameter "eval_iter" in magic-point_shapes.yaml. I thought the evaluation should only be carried out once after each epoch rather than multiple times, so what is this parameter for? I traced the code and noticed that it seems to be related to the 'total' parameter of the tqdm function, which doesn't seem to represent the number of evaluation iterations?

Thanks again for reading!

rpautrat commented 2 years ago

Hi,

The precision and recall reported during training are different metrics from the final ones computed at test time. They are a simpler but faster per-pixel version of precision and recall, and they are only there to give a rough idea of the training progress. So I think that your current precision and recall are fine.
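(Editor's note: for intuition, here is a minimal sketch of what such a per-pixel precision/recall could look like, assuming the prediction and label are keypoint maps of the same shape. The function name and details are illustrative, not the repository's exact code.)

```python
import numpy as np

def per_pixel_precision_recall(prob_map, gt_map, threshold=0.5):
    """Per-pixel precision/recall on binary keypoint maps.

    Illustrative sketch only: every pixel is classified as keypoint or
    not, with no tolerance radius, which is why these numbers are much
    harsher than the detector metrics reported in the paper.
    """
    pred = prob_map >= threshold          # binarize the predicted heatmap
    gt = gt_map.astype(bool)              # ground-truth keypoint mask
    tp = np.logical_and(pred, gt).sum()   # pixels positive in both maps
    precision = tp / max(pred.sum(), 1)   # fraction of predictions that hit
    recall = tp / max(gt.sum(), 1)        # fraction of ground truth found
    return precision, recall
```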

If you want to get the mAP as shown in the paper, you can use the notebook https://github.com/rpautrat/SuperPoint/blob/master/notebooks/detector_evaluation_magic_point.ipynb
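(Editor's note: as a rough illustration of the kind of computation behind such a mAP, ranking detections by confidence and integrating the precision-recall curve, here is a hedged sketch. The notebook additionally matches detections to ground truth within a pixel tolerance, and its exact details may differ.)

```python
import numpy as np

def average_precision(scores, is_correct, n_gt):
    """Simplified average precision for keypoint detections.

    scores:     confidence of each detection
    is_correct: whether each detection matches a ground-truth keypoint
    n_gt:       total number of ground-truth keypoints

    Illustrative only, not the notebook's exact code.
    """
    order = np.argsort(-np.asarray(scores))          # rank by confidence
    correct = np.asarray(is_correct, bool)[order]
    tp = np.cumsum(correct)                          # true positives so far
    precision = tp / (np.arange(len(correct)) + 1)   # precision at each rank
    # AP = mean of the precision values at the ranks of correct detections
    return float((precision * correct).sum() / max(n_gt, 1))
```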

'eval_iter' is only used at evaluation time to determine the maximum number of images to evaluate on, so you can safely ignore this parameter during training.
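(Editor's note: a hedged sketch of how such a cap typically works, and why it shows up as tqdm's 'total' argument. The names below are illustrative, not the repository's exact code.)

```python
from tqdm import tqdm

def evaluate(predict_fn, images, eval_iter):
    """Run evaluation on at most `eval_iter` images.

    Illustrative sketch: `eval_iter` caps the number of evaluated images
    and is passed to tqdm as `total` only so the progress bar has a known
    length; it is not the number of separate evaluation passes.
    """
    results = []
    for i, image in enumerate(tqdm(images, total=eval_iter)):
        if i >= eval_iter:   # stop once the image budget is reached
            break
        results.append(predict_fn(image))
    return results
```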

Jinjing98 commented 2 years ago

Thanks very much! It is really helpful :)