cvangysel / pytrec_eval

pytrec_eval is an Information Retrieval evaluation tool for Python, based on the popular trec_eval.

Using pytrec_eval.RelevanceEvaluator, are the results sorted internally, or used as provided? #53

Open SDcodehub opened 1 month ago

SDcodehub commented 1 month ago

I am using

        evaluator = pytrec_eval.RelevanceEvaluator(qrels, {map_string, ndcg_string, recall_string, precision_string})
        scores = evaluator.evaluate(results)

Now results is of the form {qid: {docid: embedding_score, ...}}.
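For reference, here is a minimal self-contained sketch of this setup, with hypothetical toy qrels/results and measure names following trec_eval's conventions (e.g. 'map', 'ndcg', 'P_10', 'recall_100'):

        import pytrec_eval

        # Hypothetical toy data: qrels map qid -> docid -> graded relevance (int),
        # results map qid -> docid -> retrieval score (float), both plain dicts.
        qrels = {'q1': {'d1': 1, 'd2': 0, 'd3': 2}}
        results = {'q1': {'d1': 0.9, 'd2': 0.4, 'd3': 1.3}}

        evaluator = pytrec_eval.RelevanceEvaluator(qrels, {'map', 'ndcg', 'P_10', 'recall_100'})
        scores = evaluator.evaluate(results)
        # scores is of the form {'q1': {'map': ..., 'ndcg': ..., 'P_10': ..., 'recall_100': ...}}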

So do we need to sort the results by score before using them for metric calculation? I am assuming we do, since a dict does not have a fixed ordering.

Can we avoid this sorting? In that case, do I have to manipulate the scores manually to achieve the desired result?

seanmacavaney commented 1 month ago

trec_eval (and by extension pytrec_eval) works by sorting the docids by the scores you provide (descending). If you want a specific order, you can manipulate the scores in a way that gives the desired effect.
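For example, here is a minimal sketch (using a hypothetical desired_order dict) that turns an explicit per-query ranking into scores that trec_eval will sort back into the same order:

        # Hypothetical ranked lists per query; a higher score means an earlier rank after sorting.
        desired_order = {'q1': ['d3', 'd1', 'd2']}

        results = {
            qid: {docid: float(len(ranking) - rank) for rank, docid in enumerate(ranking)}
            for qid, ranking in desired_order.items()
        }
        # results == {'q1': {'d3': 3.0, 'd1': 2.0, 'd2': 1.0}}
        # Passing this to evaluator.evaluate(results) reproduces the intended ordering.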

Do you think it would be helpful to allow providing a list of docids instead of a dict, and using that order?