gmberton / deep-visual-geo-localization-benchmark

Official code for CVPR 2022 (Oral) paper "Deep Visual Geo-localization Benchmark"
MIT License

Producing SARE loss results on vanilla NetVLAD (VGG16-based) #20

Closed UsmanMaqbool closed 1 year ago

UsmanMaqbool commented 1 year ago

I can see the loss functions (sare_joint and sare_ind) in the code. How can I reproduce the results from their paper?

Liu Liu, Hongdong Li, and Yuchao Dai. Stochastic Attraction-Repulsion Embedding for Large Scale Image Localization. In IEEE International Conference on Computer Vision, 2019.

I tried, but wasn't successful:

python3 train.py --dataset_name=pitts30k --backbone=vgg16 --criterion=sare_joint

Could you please suggest something?

ga1i13o commented 1 year ago

Although our implementation of the loss theoretically follows the description in the SARE paper, we were not able to reproduce their results. For this reason it was not included in our paper and is only available in the code.
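For reference, here is a minimal PyTorch sketch of the two SARE variants as described in the Liu et al. paper. Tensor shapes, function names, and the use of squared Euclidean distances are assumptions for illustration, not necessarily this repository's exact implementation:

```python
import torch
import torch.nn.functional as F

def sare_ind(query, positive, negatives):
    """SARE 'independent' variant: each negative is compared to the
    positive in its own two-way softmax, and the losses are averaged.
    Shapes assumed: query/positive (B, D), negatives (B, N, D)."""
    dist_pos = ((query - positive) ** 2).sum(-1)                # (B,)
    dist_neg = ((query.unsqueeze(1) - negatives) ** 2).sum(-1)  # (B, N)
    # Logits are negative squared distances; class 0 is the positive.
    logits = torch.stack(
        [-dist_pos.unsqueeze(1).expand_as(dist_neg), -dist_neg], dim=-1
    ).reshape(-1, 2)                                            # (B*N, 2)
    target = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)

def sare_joint(query, positive, negatives):
    """SARE 'joint' variant: a single softmax over the positive and
    all negatives together."""
    dist_pos = ((query - positive) ** 2).sum(-1, keepdim=True)  # (B, 1)
    dist_neg = ((query.unsqueeze(1) - negatives) ** 2).sum(-1)  # (B, N)
    logits = torch.cat([-dist_pos, -dist_neg], dim=1)           # (B, 1+N)
    target = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)
```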

UsmanMaqbool commented 1 year ago

Thanks for your reply. I think it's due to the initial checkpoints used in NetVLAD and their code. I'll look into this.
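If the gap does come from initialization, one way to test this would be to warm-start the model from the original NetVLAD weights before training. A generic PyTorch sketch, where the checkpoint path and the nesting of the state dict are hypothetical and depend on how the weights were saved:

```python
import torch

def warm_start(model, ckpt_path="netvlad_vgg16_pittsburgh.pth"):
    """Initialize `model` from a previously trained NetVLAD checkpoint.
    Both the file name and the 'state_dict' key are assumptions."""
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    # Some checkpoints nest the weights under a key; fall back to the dict itself.
    state_dict = checkpoint.get("state_dict", checkpoint)
    # strict=False tolerates layers that were renamed between codebases.
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print(f"missing keys: {missing}\nunexpected keys: {unexpected}")
    return model
```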