Alibaba-NLP / RankingGPT

Code for the paper "RankingGPT: Empowering Large Language Models in Text Ranking with Progressive Enhancement"

Large baseline differences compared to RankLLaMA #2

Open stanpcf opened 7 months ago

stanpcf commented 7 months ago
(screenshot: baseline comparison table)

For example, the BM25 average on BEIR should be 43.7.

zlh-source commented 7 months ago
  1. The parameter-configuration section of the RankT5 paper shows that RankT5 is initialized from T5-large (770M).

  2. The results in Tables 3 and 4 of our paper both come from reranking the top 1000 documents retrieved by BM25. According to our email correspondence with the RankLLaMA authors, the RankLLaMA results in their paper are based on reranking the top 100 documents retrieved by RepLLaMA, so the two setups are not directly comparable (see the sketch after this list).

  3. The RankLLaMA performance reported in our paper can be reproduced with the official RankLLaMA script.
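
To make the setup difference in point 2 concrete, here is a minimal sketch of the two evaluation regimes. It assumes pyserini for BM25 retrieval and sentence-transformers for reranking; the prebuilt index name and the cross-encoder checkpoint are illustrative stand-ins, not the models evaluated in either paper.

```python
# Sketch of how candidate-pool depth changes a reranking evaluation.
# Assumption: pyserini and sentence-transformers are installed; the index
# and checkpoint below are placeholders, not the papers' actual models.
import json

from pyserini.search.lucene import LuceneSearcher
from sentence_transformers import CrossEncoder

searcher = LuceneSearcher.from_prebuilt_index('msmarco-v1-passage')
reranker = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')

query = 'what is text ranking'

# RankingGPT setup (Tables 3 and 4): rerank the top 1000 BM25 candidates.
hits = searcher.search(query, k=1000)

# The raw stored document for this index is JSON with a 'contents' field;
# other indexes may store documents differently.
passages = [json.loads(searcher.doc(h.docid).raw())['contents'] for h in hits]
scores = reranker.predict([(query, p) for p in passages])
reranked = sorted(zip(hits, scores), key=lambda x: x[1], reverse=True)

# RankLLaMA's reported numbers instead rerank the top 100 candidates from
# RepLLaMA, a dense retriever, so a different (and smaller) candidate pool
# feeds the reranker.
```

Because the candidate pool both bounds the achievable recall and changes the difficulty of the reranking task, metrics computed over BM25 top-1000 and RepLLaMA top-100 measure different things and should not be compared directly.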