cvangysel / pytrec_eval

pytrec_eval is an Information Retrieval evaluation tool for Python, based on the popular trec_eval.
http://ilps.science.uva.nl/
MIT License

RelevanceEvaluator breaks when evaluating on multiple runs #38

Open seanmacavaney opened 2 years ago

seanmacavaney commented 2 years ago

As identified by @grodino in https://github.com/terrierteam/ir_measures/issues/42

In short, when a measure with multiple cutoffs is provided to RelevanceEvaluator, all cutoffs are returned on the first invocation, but only one of them is returned on subsequent invocations.

>>> import pytrec_eval
>>> qrel = {
...   '0': {'D0': 0, 'D1': 1, 'D2': 1, 'D3': 1, 'D4': 0},
...   '1': {'D0': 1, 'D3': 2, 'D5': 2}
... }
>>> run = {
...   '0': {'D0': 0.8, 'D2': 0.7, 'D1': 0.3, 'D3': 0.4, 'D4': 0.1},
...   '1': {'D1': 0.8, 'D3': 0.7, 'D4': 0.3, 'D2': 0.4, 'D10': 8.}
... }
>>> evaluator = pytrec_eval.RelevanceEvaluator(qrel, {'map', 'ndcg_cut.10,100,2'})
>>> print(evaluator.evaluate(run))
{'0': {'map': 0.6388888888888888, 'ndcg_cut_2': 0.38685280723454163, 'ndcg_cut_10': 0.7328286204777911, 'ndcg_cut_100': 0.7328286204777911}, '1': {'map': 0.1111111111111111, 'ndcg_cut_2': 0.0, 'ndcg_cut_10': 0.26582598262939583, 'ndcg_cut_100': 0.26582598262939583}}
>>> print(evaluator.evaluate(run))
{'0': {'map': 0.6388888888888888, 'ndcg_cut_10': 0.7328286204777911}, '1': {'map': 0.1111111111111111, 'ndcg_cut_10': 0.26582598262939583}}
# ^ second invocation is missing ndcg_cut_2 and ndcg_cut_100
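
A possible workaround, sketched under the assumption that the missing cutoffs come from state retained inside the evaluator between calls (not a fix confirmed by the maintainers), is to construct a fresh RelevanceEvaluator for every evaluate call:

import pytrec_eval

def evaluate_run(qrel, run, measures):
    # Hypothetical helper: building a new evaluator per call means the
    # multi-cutoff measure string (e.g. 'ndcg_cut.10,100,2') is parsed
    # fresh each time, so every cutoff appears in the results.
    evaluator = pytrec_eval.RelevanceEvaluator(qrel, measures)
    return evaluator.evaluate(run)

# Using the qrel/run dicts from above, both calls now include
# ndcg_cut_2, ndcg_cut_10 and ndcg_cut_100.
first = evaluate_run(qrel, run, {'map', 'ndcg_cut.10,100,2'})
second = evaluate_run(qrel, run, {'map', 'ndcg_cut.10,100,2'})

This trades some overhead (the qrels are re-indexed on each construction) for consistent results across invocations.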