bytedance / R2Former

Official repository for R2Former: Unified Retrieval and Reranking Transformer for Place Recognition
Apache License 2.0

The accuracy of my current training is not up to the accuracy in your paper #8

Closed: LKELN closed this issue 1 year ago

LKELN commented 1 year ago

The accuracy of my current training does not reach the accuracy in your paper. I have trained it in full twice: the first time on an A800 I got R@1: 87.4, R@5: 93.8, R@10: 94.5, R@20: 95.4, R@100: 97.0; the second time on an RTX 3090 I got R@1: 84.5, R@5: 91.5, R@10: 93.2, R@20: 93.8, R@100: 95.5. Is this the dataset you used?

[screenshot of the dataset attached]

Jeff-Zilence commented 1 year ago

Can you describe your experiment setting? For example, which script did you run? Did you modify anything in the code?

LKELN commented 1 year ago

I used the experiment settings you provided. I first ran train_global_retrieval.sh to get a global retrieval model to resume from for reranking, then I ran train_reranking.sh. I did not change anything.

LKELN commented 1 year ago

Except for the batch size, because I don't have that much GPU memory.

Jeff-Zilence commented 1 year ago

You should be able to get the same result if you are not using an extremely small batch size and follow all the instructions carefully. The learning rate must be tuned if your batch size is very different from the one in our paper. There is no way for me to figure out the issue if the hyper-parameters (including package versions) and logs are not given. There are several steps you should take to debug (a sketch of the sequence follows the list):

  1. Download the pretrained model and use test.sh. See if you can get R@1 around 89.7.
  2. Use train_global_retrieval.sh and see if you can get R@1 over 79.
  3. Use the first command of train_reranking.sh and see if you can get R@1 around 88.4.
  4. Use the second command of train_reranking.sh and see if you can get R@1 around 89.7.
  5. If you want to train it end-to-end, you can directly run train_end_to_end.sh; the R@1 should be around 87.3.
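
A minimal sketch of that sequence, assuming the scripts are run from the repository root and take no extra arguments (both assumptions; check the scripts themselves before relying on this):

```sh
# Hedged outline of the debugging sequence above; the script names come from
# the repo, everything else (working directory, arguments) is an assumption.
bash test.sh                    # step 1: pretrained model, expect R@1 ~ 89.7
bash train_global_retrieval.sh  # step 2: expect R@1 over 79
bash train_reranking.sh         # steps 3-4: first command ~ 88.4, second ~ 89.7
bash train_end_to_end.sh        # step 5 (optional): expect R@1 ~ 87.3
```
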
LKELN commented 1 year ago

  1. I can get R@1 around 89.7.
  2. I used train_global_retrieval.sh and got R@1 74.5. Maybe the learning rate needs to be tuned for train_global_retrieval.sh, or could you provide a finetuned model for global retrieval? And is msls_v2_deits.pth not a finetuned model for global retrieval?

[logs attached: info.log, info (1).log]
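
One common heuristic for this (an assumption on my part, not something the authors state) is to scale the learning rate linearly with the batch size. A minimal sketch, where the paper's batch size and learning rate are placeholders to be read from the actual training scripts:

```sh
# Linear LR scaling sketch. PAPER_BS and PAPER_LR are placeholders: take the
# real values from train_global_retrieval.sh / the paper before using this.
PAPER_BS=16   # assumed batch size from the paper's setting
PAPER_LR=1e-5 # assumed learning rate from the paper's setting
MY_BS=4       # the smaller batch size that fits your GPU
MY_LR=$(awk -v lr="$PAPER_LR" -v b="$MY_BS" -v B="$PAPER_BS" \
            'BEGIN { printf "%.2e", lr * b / B }')
echo "scaled learning rate: $MY_LR"  # pass this to the training script
```
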

Jeff-Zilence commented 1 year ago

Please read the instructions carefully. The global retrieval model is provided as "msls_v2_deits.pth" on the README page. It follows the standard pipeline of the VG benchmark and there is no finetuning. Please read the paper for more details.

LKELN commented 1 year ago

OK, thanks for your reply!