Closed feymanpriv closed 5 years ago
Original comes from https://github.com/DagnyT/hardnet
It does not need to be retrained, because AffNet makes its job easier. If one retrains HardNet afterwards, I expect it to become less robust. Moreover, the idea of modular local features is that the detector, shape estimator, and descriptor should each be upgradable without retraining the others.
@ducha-aiki Thank you for your reply! I think I have understood your idea: AffNet makes the detector more reliable and thus helps improve the final result. If we retrained HardNet, we would still be focusing on descriptor training and would lose the point of your modular AffNet design.
By the way, how many features do you select from the Hessian detector when testing on Oxford? I find that some samples cannot yield 3000 features, the number used in your matching example.
@ym547559398 I haven't used a fixed number; instead I used a Hessian threshold, to be fully compatible with previous experiments: https://dspace.cvut.cz/bitstream/handle/10467/9548/2009-Efficient-representation-of-local-geometry-for-large-scale-object-retrieval.pdf
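To illustrate the difference being discussed, here is a minimal stdlib sketch contrasting threshold-based selection (which yields a variable number of keypoints per image, so some images fall short of 3000) with a fixed top-k selection. Function names are illustrative, not from the repo:

```python
def select_by_threshold(keypoints, responses, threshold):
    """Keep every keypoint whose Hessian response exceeds the threshold.

    The number of surviving keypoints varies per image, unlike a fixed
    top-k selection, so some images may yield fewer than 3000 features.
    """
    return [kp for kp, r in zip(keypoints, responses) if r > threshold]


def select_top_k(keypoints, responses, k):
    """For comparison: keep the k strongest responses regardless of value."""
    order = sorted(range(len(responses)),
                   key=lambda i: responses[i], reverse=True)
    return [keypoints[i] for i in order[:k]]
```

With a threshold, a weakly textured image simply produces fewer features; with top-k, it produces exactly k but the weakest ones may be unreliable.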
@ducha-aiki I have tested your released model on the roxford dataset, but the final result is not as good as expected:
roxford5k: mAP E: 67.17, M: 53.37, H: 30.37
roxford5k: mP@k[1 5 10] E: [95.59 72.06 55.88], M: [97.14 81.43 62.86], H: [78.57 31.43 14.29]

My scheme is:
- Use the Hessian detector and AffNet to extract 3000 features, and HardNet++ to extract descriptors (3000×128).
- Use RANSAC to find good matches between each query and reference image, and use the sum of the match mask (i.e. the inlier count) as the score.
- Use revisitop to evaluate the result.
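The matching-and-scoring step above can be sketched in plain Python: mutual nearest-neighbour matches between two descriptor sets, then a score equal to the sum of the verification mask. In practice the mask would come from geometric verification such as `cv2.findHomography` with `cv2.RANSAC`; here it is passed in, and all names are illustrative:

```python
def l2(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def mutual_nn_matches(desc1, desc2):
    """Index pairs (i, j) where desc1[i] and desc2[j] are each other's NN."""
    nn12 = [min(range(len(desc2)), key=lambda j: l2(d, desc2[j])) for d in desc1]
    nn21 = [min(range(len(desc1)), key=lambda i: l2(d, desc1[i])) for d in desc2]
    return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]


def match_score(match_mask):
    """Image-pair score = number of verified (inlier) matches."""
    return sum(match_mask)
```

One thing worth checking with this kind of scoring: raw inlier counts favour images with many detected features, so images where fewer than 3000 features were extracted are at a systematic disadvantage.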
Can you give me some suggestions?
Thanks for your work! I find that in the training and testing code, the HardNet descriptor is fixed: weights are loaded from "HardNet++.pth" and not changed during training. I wonder where the original "HardNet++.pth" comes from and why the HardNet model does not need to be trained again.
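The "fixed descriptor" pattern being asked about can be sketched as follows (assuming PyTorch): load the pretrained checkpoint, then disable gradients so the optimizer never touches those weights. The checkpoint name "HardNet++.pth" is from the thread; the module definitions are stand-ins, not the actual architectures:

```python
import torch
import torch.nn as nn

# Stand-in for the pretrained HardNet descriptor network.
descriptor = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU())
# In the real code the pretrained weights would be loaded here, e.g.:
# descriptor.load_state_dict(torch.load("HardNet++.pth"))

# Freeze the descriptor: its parameters get no gradients and are
# never passed to the optimizer below.
for p in descriptor.parameters():
    p.requires_grad = False
descriptor.eval()

# Stand-in for the module that actually gets trained (e.g. AffNet).
trainable = nn.Linear(8, 2)
optimizer = torch.optim.SGD(trainable.parameters(), lr=0.01)
```

Because only `trainable.parameters()` reaches the optimizer, the descriptor acts as a fixed feature extractor throughout training.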