microsoft / MSMARCO-Passage-Ranking-Submissions

Submission archive for the MS MARCO passage ranking leaderboard
https://microsoft.github.io/MSMARCO-Passage-Ranking-Submissions/leaderboard
MIT License

How to evaluate results for past TREC DL tracks #67

Open liulizuel opened 1 year ago

liulizuel commented 1 year ago

I finetuned my model on the train data from https://microsoft.github.io/msmarco/TREC-Deep-Learning-2019 (Top 1000 Train, top1000.train.tar.gz, 175.0 GB, 478,002,393 rows, tsv: qid, pid, query, passage). With the finetuned model I then produced rankings for both the dev set (Top 1000 Dev, top1000.dev.tar.gz, 2.5 GB, 6,668,967 rows, same tsv format) and the test set (msmarco-passagetest2019-top1000.tsv, 71 MB, 189,877 rows, same tsv format).
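
For concreteness, my reranking step looks roughly like the sketch below: read the top-1000 TSV, score each (query, passage) pair, and write a run file in the standard six-column TREC format. The `score()` function is just a placeholder for my finetuned model, and the `my-model` tag is arbitrary.

```python
# Sketch: rerank the top-1000 TSV and write a TREC-format run file.
from collections import defaultdict

def score(query: str, passage: str) -> float:
    return 0.0  # placeholder: replace with the finetuned model's relevance score

runs = defaultdict(list)  # qid -> list of (score, pid)
with open("msmarco-passagetest2019-top1000.tsv", encoding="utf-8") as f:
    for line in f:
        qid, pid, query, passage = line.rstrip("\n").split("\t", 3)
        runs[qid].append((score(query, passage), pid))

with open("my_run.trec", "w") as out:
    for qid, scored in runs.items():
        for rank, (s, pid) in enumerate(sorted(scored, reverse=True), 1):
            # Six-column TREC run format: qid Q0 docid rank score tag
            out.write(f"{qid} Q0 {pid} {rank} {s} my-model\n")
```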

However, I don't know how to submit my results. I read the submission guideline, but it does not explain how to submit results for previous years. Since I plan to run experiments on the TREC 2019, 2020, 2021, and 2022 Deep Learning tracks, knowing how to evaluate these results is important to me.
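
If local scoring against the published qrels is acceptable, my understanding is that something like the sketch below would work with pytrec_eval (pip install pytrec_eval). The qrels file name (2019qrels-pass.txt, which I believe is published on the TREC website) is an assumption on my part, as is the choice of metrics; NDCG@10 is, as far as I know, the primary metric for the TREC DL passage task.

```python
# Sketch: score a TREC-format run file against the official qrels locally.
import pytrec_eval

def load_qrels(path):
    qrels = {}
    with open(path) as f:
        for line in f:
            qid, _, pid, rel = line.split()
            qrels.setdefault(qid, {})[pid] = int(rel)
    return qrels

def load_run(path):
    run = {}
    with open(path) as f:
        for line in f:
            qid, _, pid, _, score, _ = line.split()
            run.setdefault(qid, {})[pid] = float(score)
    return run

qrels = load_qrels("2019qrels-pass.txt")  # assumed local copy of the 2019 qrels
run = load_run("my_run.trec")             # run file from the step above
evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut.10", "recip_rank"})
per_query = evaluator.evaluate(run)
print("NDCG@10:", sum(q["ndcg_cut_10"] for q in per_query.values()) / len(per_query))
print("MRR:    ", sum(q["recip_rank"] for q in per_query.values()) / len(per_query))
```

I assume the trec_eval binary would give the same numbers (e.g. `trec_eval -m ndcg_cut.10 -m recip_rank 2019qrels-pass.txt my_run.trec`), but I am not sure whether a local score like this is what is expected, hence my question below.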

Can you provide detailed information on how to evaluate or submit previous years’ results? Looking forward to your reply.