QVPR / Patch-NetVLAD

Code for the CVPR2021 paper "Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition"
MIT License

How time-consuming is the hard sample mining module during training? #79

Closed whu-lyh closed 1 year ago

whu-lyh commented 1 year ago

Hi @Tobias-Fischer, may I ask about the time consumption during training? In particular, how does the cache parameter affect the hard sample mining module here? I tried to use this scheme in my own retrieval task, but it costs much more time. By the way, I use the default cache parameters.

Tobias-Fischer commented 1 year ago

Are you asking how long it took to train the model? Also, you say it takes “much more time” - compared to what?

whu-lyh commented 1 year ago

Oops, I meant the time consumption when hard sample mining is used, compared to training without mining. I found there are two triplet generation modes, particularly for negative samples: with or without a network forward pass. Code is here. Besides, I would like to know more about the cached-subset mining strategy. Can you point me to some related papers? Thanks.

Tobias-Fischer commented 1 year ago

Hi, as written in the comments, if no network is used, then no hard negative mining is done: https://github.com/QVPR/Patch-NetVLAD/blob/cba383478e6b656c76a8c5034a3681f69ab59ddc/patchnetvlad/training_tools/msls.py#L421-L423
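To illustrate the distinction being discussed, here is a minimal, hypothetical sketch (not the repo's actual code) of the two negative-sampling modes: without a network, negatives are simply drawn at random; with a network, a descriptor cache is refreshed by a forward pass over a subset of the database every so often, and hard negatives are mined against that cache. The `encode` function, array sizes, and the cache-subset size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the network forward pass (assumption: any embedding model).
W = rng.standard_normal((8, 4))
def encode(x):
    return x @ W

database = rng.standard_normal((100, 8))  # 100 "images" of dimension 8
cache_size = 20                           # subset cached per refresh (assumed)

def sample_random_negatives(positives, k=5):
    # No network forward: no mining, negatives are random non-positive indices.
    pool = [i for i in range(len(database)) if i not in positives]
    return list(rng.choice(pool, size=k, replace=False))

def refresh_cache():
    # Periodic forward pass over a random subset; this is the expensive step
    # whose frequency the cache parameter controls.
    idx = rng.choice(len(database), size=cache_size, replace=False)
    return idx, encode(database[idx])

def mine_hard_negatives(query_desc, cache_idx, cache_desc, positives, k=5):
    # Hard negatives: cached items closest to the query that are not positives.
    dists = np.linalg.norm(cache_desc - query_desc, axis=1)
    order = np.argsort(dists)
    negs = [int(cache_idx[i]) for i in order if int(cache_idx[i]) not in positives]
    return negs[:k]

cache_idx, cache_desc = refresh_cache()
query_desc = encode(database[0:1])[0]
hard_negs = mine_hard_negatives(query_desc, cache_idx, cache_desc, positives={0})
print(hard_negs)
```

The point of the cache is amortization: instead of re-encoding the database for every triplet, one forward pass over a subset is reused across many mining queries until the next refresh, so descriptors grow stale between refreshes but the per-step cost drops sharply.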

The relevant code is from https://github.com/mapillary/mapillary_sls, which also has a nice paper attached where you can read more.