Hi @Ashespt,
We use a random seed instead of a fixed seed, which could explain the small difference. Since our training pipeline relies on many stochastic steps (random initial weights, random homographies, random image augmentations, random sampling, etc.), your model may simply have converged to a slightly worse local optimum.
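For reference, here is a minimal sketch of how one could pin those sources of randomness in a typical PyTorch pipeline. The `seed_everything` helper and the seed value are illustrative, not part of the SiLK codebase, and even this does not guarantee bit-identical runs across hardware:

```python
# Minimal sketch: fix the common sources of randomness in a PyTorch-based
# pipeline. Illustrative only; not part of the SiLK codebase.
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Seed Python, NumPy, and PyTorch RNGs for (mostly) repeatable runs."""
    random.seed(seed)                 # Python RNG (e.g. augmentation choices)
    np.random.seed(seed)              # NumPy RNG (e.g. random homographies)
    torch.manual_seed(seed)           # PyTorch CPU RNG (e.g. weight init)
    torch.cuda.manual_seed_all(seed)  # PyTorch GPU RNGs
    # Optional: trade speed for determinism in cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything(42)
```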
It can also happen in the reverse direction. For example, https://github.com/facebookresearch/silk/issues/26#issuecomment-1637526492 found better results than we reported.
If you need the best-performing model, I would suggest running multiple trainings and keeping the best one.
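A minimal sketch of that "train several times, keep the best" strategy is below. The `train` and `evaluate` callables are hypothetical stand-ins for your own training entry point and validation metric (e.g. an HPatches score); they are not SiLK APIs:

```python
# Minimal sketch: run one full training per seed and keep the checkpoint
# with the best validation score. `train` and `evaluate` are hypothetical
# placeholders for your own pipeline, not SiLK APIs.
from typing import Callable, Iterable, Tuple


def pick_best_run(
    train: Callable[[int], str],       # seed -> checkpoint path
    evaluate: Callable[[str], float],  # checkpoint path -> validation score
    seeds: Iterable[int] = (0, 1, 2, 3, 4),
) -> Tuple[float, str]:
    """Train once per seed and return the best-scoring checkpoint."""
    best_score, best_ckpt = float("-inf"), ""
    for seed in seeds:
        ckpt = train(seed)      # full training run with this seed
        score = evaluate(ckpt)  # higher is assumed better here
        if score > best_score:
            best_score, best_ckpt = score, ckpt
    return best_score, best_ckpt
```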
Thanks! I'll try it.
Hi, I know you have reproduced the results successfully, but I am having trouble training the model myself (https://github.com/facebookresearch/silk/issues/72#issue-2698645317).
Would you be willing to share your code and environment setup, which would help me a lot? Or could we discuss this? Thank you very much.
Thanks for your great work! I trained a model on the COCO dataset and tested it on HPatches. Almost all of my results are 0.02-0.03 lower than the results in the paper. That said, the results I reproduced are still SOTA. Could you give some suggestions about this gap?