facebookresearch / silk

SiLK (Simple Learned Keypoint) is a self-supervised deep learning keypoint model.
GNU General Public License v3.0

About the results #36

Closed Ashespt closed 1 year ago

Ashespt commented 1 year ago

Thanks for your great work! I have trained a model on the COCO dataset and tested it on HPatches. Almost all results are 0.02-0.03 lower than the results in the paper. That said, the results I reproduced are still SOTA. Could you give some suggestions about this gap?

gleize commented 1 year ago

Hi @Ashespt,

We use a random seed instead of a fixed seed, which could explain the small difference. Since our training pipeline relies on many stochastic steps (random initial weights, random homographies, random image augmentations, random sampling, etc.), your model may simply have converged to a slightly worse local optimum.
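
For reference, here is a minimal sketch of how one could pin those sources of randomness in a PyTorch-based setup before launching training. The `seed_everything` helper below is illustrative only, not part of SiLK's codebase, and where exactly it should be called in SiLK's training entry point is not shown.

```python
# Illustrative sketch: fix the RNGs that drive weight initialization,
# homography sampling, image augmentation, and keypoint sampling,
# so that separate runs become comparable.
import random

import numpy as np
import torch


def seed_everything(seed: int = 0) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for deterministic cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything(42)
```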

It can also happen in the reverse direction. For example, https://github.com/facebookresearch/silk/issues/26#issuecomment-1637526492 found better results than we reported.

If you need the best-performing model, I would suggest running multiple trainings and keeping the best one, as in the sketch below.
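
A rough sketch of that "train several times, keep the best" loop. Here `train_and_evaluate` is a hypothetical helper standing in for one full training run followed by an HPatches evaluation; it is not an actual SiLK API, and the metric used to rank runs is up to you.

```python
# Hypothetical multi-seed selection loop: train once per seed,
# evaluate each run, and keep the seed with the best score.
best_seed, best_score = None, float("-inf")

for seed in (0, 1, 2, 3, 4):
    score = train_and_evaluate(seed)  # e.g. mean matching accuracy on HPatches
    if score > best_score:
        best_seed, best_score = seed, score

print(f"best seed: {best_seed} (score={best_score:.3f})")
```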

Ashespt commented 1 year ago

Thanks! I'll try it.

100656cyx commented 20 hours ago

Hi, I know that you have reproduced the code successfully, but I have a problem training it (https://github.com/facebookresearch/silk/issues/72#issue-2698645317).

Would you be willing to share your code and environment, which would help me a lot? Or could we talk about this? Thank you very much.