cvangysel / pytrec_eval

pytrec_eval is an Information Retrieval evaluation tool for Python, based on the popular trec_eval.
http://ilps.science.uva.nl/
MIT License

ndcg_cut.10 and -M 10 ndcg are different when multiple levels of relevance are given #40

Open · Taosheng-ty opened this issue 1 year ago

Taosheng-ty commented 1 year ago

[screenshot: evaluation output showing different values for ndcg_cut.10 and -M 10 ndcg]

ndcg_cut.10 and -M 10 ndcg are different when multiple levels of relevance are given. However, when binary relevance is given, they are the same.
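A minimal way to reproduce the comparison (the qrels/run below are made-up stand-in data; the -M 10 value comes from the trec_eval command line, since as far as I can tell pytrec_eval does not expose that flag):

```python
import pytrec_eval

# Graded judgements (levels 0-3) -- hypothetical stand-in data.
qrel = {'q1': {'d1': 3, 'd2': 2, 'd3': 1, 'd4': 1}}
run = {'q1': {'d1': 1.0, 'd5': 0.9, 'd2': 0.8, 'd3': 0.7}}

evaluator = pytrec_eval.RelevanceEvaluator(qrel, {'ndcg', 'ndcg_cut.10'})
print(evaluator.evaluate(run))  # yields 'ndcg' and 'ndcg_cut_10' per query
```

The -M 10 variant of ndcg was obtained with the trec_eval CLI, e.g. `trec_eval -M 10 -m ndcg qrels.txt run.txt`.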

seanmacavaney commented 1 year ago

Hey @Taosheng-ty,

This issue should probably go to the https://github.com/usnistgov/trec_eval repository instead. But the difference between the options is:

-M k cuts off the run at rank k -- but leaves the ideal DCGs (against which nDCG is normalised) untouched. It's as if your search engine only returns k results, but there can still be >k perfectly relevant documents.

ndcg_cut.k cuts off both the run and the ideal DCG values; it's as if there were only k perfectly relevant documents.
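To make that concrete, a toy calculation (made-up graded gains, with relevant documents remaining beyond the cutoff):

```python
import math

def dcg(gains):
    # DCG with the log2(rank + 1) discount used by trec_eval's ndcg.
    return sum(g / math.log2(rank + 1) for rank, g in enumerate(gains, start=1))

k = 2
run_gains = [3, 2, 0, 1, 1]    # gains of the documents in ranked order
ideal_gains = [3, 2, 1, 1, 0]  # the same gains sorted in descending order

# -M k: truncate the run only; the ideal DCG still counts every relevant doc.
ndcg_with_M = dcg(run_gains[:k]) / dcg(ideal_gains)

# ndcg_cut.k: truncate both the run and the ideal ranking at rank k.
ndcg_cut_k = dcg(run_gains[:k]) / dcg(ideal_gains[:k])

print(ndcg_with_M, ndcg_cut_k)  # ~0.82 vs. 1.0 here
```

With binary gains and at most k relevant documents per query, the two denominators coincide, which would explain why the values matched in your binary-relevance case.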

I can't really think of a situation where you'd want to use the -M option with nDCG.

- sean

Taosheng-ty commented 1 year ago

Thanks, Sean.