QVPR / Patch-NetVLAD

Code for the CVPR2021 paper "Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition"
MIT License

Some questions about loss and GPU memory #47

Closed: JinRanYAO closed this issue 2 years ago

JinRanYAO commented 2 years ago

Hello Stephen, thanks for your excellent work. I have some questions about Patch-NetVLAD: which loss function did you use? Is it the same triplet loss as in NetVLAD? As far as I understand, the triplet loss operates on the full image. Is it suitable for patches, and how would it need to be modified to work on them? Also, how much GPU memory is needed to run the Patch-NetVLAD training code? Looking forward to your reply!

Tobias-Fischer commented 2 years ago

Hi @JinRanYAO,

We indeed use the standard triplet loss and train the global descriptors (for the full image). The learned descriptors are then applied to the patches. For more information, please see https://github.com/QVPR/Patch-NetVLAD/issues/27
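For illustration, here is a minimal sketch of what training with a triplet loss on global descriptors looks like in PyTorch. The tensor names, descriptor dimension, and margin value are assumptions for the example, not the repository's exact training code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical placeholders for L2-normalised global (full-image) NetVLAD
# descriptors; in real training these come from the network, and negatives
# are typically hard-mined from the database.
anchor = F.normalize(torch.randn(8, 4096), p=2, dim=1)    # query descriptors
positive = F.normalize(torch.randn(8, 4096), p=2, dim=1)  # geographically close images
negative = F.normalize(torch.randn(8, 4096), p=2, dim=1)  # negatives far from the query

# Standard triplet margin loss on whole-image descriptors. The patch-level
# descriptors reuse the weights learned this way rather than being trained
# with a separate patch-specific loss.
criterion = nn.TripletMarginLoss(margin=0.1, p=2)
loss = criterion(anchor, positive, negative)
print(loss.item())
```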

I am not sure about the exact numbers regarding GPU memory, but 8 or 12 GB should be sufficient.

I hope this helps - feel free to reopen if you have more questions.