Closed. zhangpj closed this issue 4 years ago.
You could increase `--tuple-size` in the training scripts to use more triplets on one GPU for training. In my experiments, I used 4 GPUs with one triplet on each GPU, so a batch of 4 triplets was used. If the GPU memory of a single 2080 Ti is not enough for 4 triplets, you may need to decrease the learning rate to match your smaller batch size.

@yxgeee All right, thanks for your suggestion, I will give it a try.
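For illustration, a minimal sketch of the linear learning-rate scaling implied above; the base values and the argument name for the learning rate are assumptions, so check the actual defaults in the training scripts.

```python
# Hypothetical linear scaling of the learning rate with the number of triplets
# per step. The reference setup is 4 GPUs x 1 triplet = 4 triplets per step.
base_lr = 1e-3        # assumed base learning rate for 4 triplets/step (check the script)
base_triplets = 4     # 4 GPUs x 1 triplet each in the reference setup
my_triplets = 1       # e.g. what fits on a single 2080 Ti

scaled_lr = base_lr * my_triplets / base_triplets
print(f"suggested learning rate: {scaled_lr}")  # pass via the script's LR argument
```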
@yxgeee Hi, in your paper you also evaluated SFRS on the Oxford 5k, Paris 6k and Holidays datasets. Could you share the source code you used to evaluate SFRS on those datasets, or explain how you evaluated SFRS on them?
My colleague helped me test SFRS on the retrieval datasets, and I may merge that code into this repo after re-organizing it. We strictly followed the same settings (e.g., image size, augmentation, etc.) as SARE and NetVLAD, so you could also refer to their code for the evaluation details.
Ok, thank you.
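For reference, a generic retrieval-style evaluation (not the authors' code) can be sketched as below. It assumes L2-normalized global descriptors extracted with the same image size and preprocessing as SARE/NetVLAD, and it omits the junk-image handling used in the official Oxford/Paris protocol; the feature arrays and ground-truth lists are placeholders.

```python
import numpy as np

def mean_average_precision(scores, positives):
    """scores: (num_queries, num_db) similarity matrix;
    positives: one set of relevant database indices per query."""
    aps = []
    for q, pos in enumerate(positives):
        order = np.argsort(-scores[q])           # rank database images by similarity
        hits, precisions = 0, []
        for rank, idx in enumerate(order, start=1):
            if idx in pos:
                hits += 1
                precisions.append(hits / rank)   # precision at each relevant result
        aps.append(np.mean(precisions) if precisions else 0.0)
    return float(np.mean(aps))

# query_feats, db_feats: placeholder (N, D) arrays of L2-normalized descriptors.
# scores = query_feats @ db_feats.T              # cosine similarity for unit vectors
# print("mAP:", mean_average_precision(scores, ground_truth_positives))
```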
Hi, thank you for sharing this project. Good job! I tried to run this project, but there are some questions that confuse me.
1. When running `train_sfrs_dist.sh`, Loss_hard and Loss_soft behave as follows: Loss_hard << soft-weight (0.5) * Loss_soft. Does Loss_hard therefore make only a small, or even negligible, contribution? Also, Loss_soft does not seem to converge. Have you ever seen a similar phenomenon when training the network? (A sketch of how the two terms are combined is given after these questions.)
2. The results on Pitts250k of the best model in my reproduction are slightly lower than those in your paper: 89.8% | 95.9% | 97.3% vs. 90.7% | 96.4% | 97.6%. The best model in my reproduction is the output of the 5th epoch of the third generation, instead of converging at the fourth generation as mentioned in the paper. Is your best model the output of the last iteration of training?
3. I only use one GPU (2080 Ti); the other parameters are the defaults. I don't know whether the inferior results are due to using too few GPUs, or whether there is something else I need to pay attention to.
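Regarding question 1, a minimal sketch of how a hard triplet term is typically combined with a soft-weighted distillation term, assuming the total objective is Loss_hard + 0.5 * Loss_soft as in the logs; the loss choices, margin, and temperature here are placeholders and may differ from the actual SFRS implementation.

```python
import torch.nn.functional as F

def combined_loss(anchor, positive, negative, student_logits, teacher_logits,
                  soft_weight=0.5, margin=0.1, temperature=0.07):
    # Hard term: triplet margin loss on the embeddings of the current batch.
    loss_hard = F.triplet_margin_loss(anchor, positive, negative, margin=margin)

    # Soft term: KL divergence between the student's softened similarity
    # distribution and the previous generation's (teacher's) distribution.
    loss_soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    )
    # Even with soft_weight = 0.5, loss_soft can dominate numerically if its
    # magnitude is much larger than loss_hard, as observed in the question.
    return loss_hard + soft_weight * loss_soft
```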