Closed LKELN closed 1 year ago
Can you describe your experiment setting? For example, which script did you run? Did you modify anything in the code?
I used the experiment settings you provided: I ran train_global_retrieval.sh first to get a global retrieval model for reranking, then ran train_reranking.sh without changing anything.
Except for the batch size, because I don't have that much GPU memory.
You should be able to get the same result if you are not using an extremely small batch size and follow all the instructions carefully. The learning rate must be tuned if your batch size is very different from the one in our paper. There is no way for me to figure out the issue if the hyper-parameters (including package versions) and logs are not given. There are several steps you should take to debug.
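As a rough starting point for the learning-rate tuning mentioned above, a common heuristic is the linear scaling rule: scale the learning rate in proportion to the batch-size ratio. The sketch below is only illustrative — the function name and the baseline values are assumptions, not taken from this repository or paper:

```python
def scale_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Scale a learning rate linearly with the batch size.

    This is the common linear scaling heuristic; it is a starting
    point for a hyper-parameter search, not a guarantee.
    """
    return base_lr * new_batch / base_batch


# Hypothetical example: if the paper trained at batch size 16 with
# lr 1e-4 and memory forces batch size 4, start searching near:
print(scale_lr(1e-4, 16, 4))  # 2.5e-05
```

In practice one would still sweep a small range around the scaled value, since very small batches also change gradient noise, not just step size.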
Or could you provide a finetuned model for global retrieval? Is msls_v2_deits.pth not a finetuned global retrieval model? (log attached: info (1).log)
Please read the instructions carefully. The global retrieval model is provided as "msls_v2_deits.pth" on the README page. The global retrieval model follows the standard pipeline of the VG benchmark and there is no finetuning. Please read the paper for more details.
OK, thanks for your reply!
The accuracy of my current training does not reach the numbers in your paper. I have trained the full pipeline twice: the first time on an A800, R@1: 87.4, R@5: 93.8, R@10: 94.5, R@20: 95.4, R@100: 97.0; the second time on an RTX 3090, R@1: 84.5, R@5: 91.5, R@10: 93.2, R@20: 93.8, R@100: 95.5. Is this the dataset you used?