lexeme-dev / core

A project to develop novel algorithms and analysis techniques for legal research

Design new metric for recommender system performance #70

Open ProbablyFaiz opened 2 years ago

ProbablyFaiz commented 2 years ago

Currently, we use the variant of recall described in Huang 2021:

Our initial results have been very promising. The primary metric we are currently using is recall, the percentage of documents defined as relevant that we are successfully able to recommend. We adopt the measurement approach taken by Huang et al. (2021).

We select a random opinion in the federal corpus and remove it from our network (as if the opinion never existed). We input all but one of the opinion’s neighbors into the recommendation software. We measure whether the omitted neighbor was the top recommendation, in the top 5 recommendations, or in the top 20 recommendations.
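For concreteness, here is a minimal sketch of how that leave-one-out measurement could be wired up. It assumes a NetworkX-style citation graph and a hypothetical `recommend(graph, seeds, k)` interface returning a ranked list of opinion IDs; neither matches our actual codebase's names.

```python
import random
from typing import Callable, Iterable

import networkx as nx


def recall_at_k_trial(
    graph: nx.Graph,
    recommend: Callable[[nx.Graph, list, int], list],
    k_values: tuple = (1, 5, 20),
    rng: random.Random | None = None,
) -> dict:
    """One leave-one-out trial: hold out a neighbor of a random opinion and
    check whether the recommender recovers it in its top-k results."""
    rng = rng or random.Random()

    # Pick an opinion with at least two neighbors, so one can be held out
    # while the rest serve as the recommender's input.
    candidates = [n for n in graph.nodes if graph.degree(n) >= 2]
    opinion = rng.choice(candidates)

    neighbors = list(graph.neighbors(opinion))
    held_out = rng.choice(neighbors)
    seeds = [n for n in neighbors if n != held_out]

    # Remove the sampled opinion from the network, as if it never existed.
    pruned = graph.copy()
    pruned.remove_node(opinion)

    recommendations = recommend(pruned, seeds, max(k_values))  # hypothetical interface
    return {k: held_out in recommendations[:k] for k in k_values}


def recall_at_k(graph, recommend, n_trials=1_000, k_values=(1, 5, 20)):
    """Fraction of trials in which the held-out neighbor appears in the top k."""
    rng = random.Random(0)
    hits = {k: 0 for k in k_values}
    for _ in range(n_trials):
        for k, hit in recall_at_k_trial(graph, recommend, k_values, rng).items():
            hits[k] += hit
    return {k: count / n_trials for k, count in hits.items()}
```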

This recall metric is alright, but it leaves a lot to be desired with respect to a fuller understanding of our models' performance and their ability to surface useful cases. We've got some other ideas (to be documented at a later time) about what kinds of metrics might better serve us.

varun-magesh commented 2 years ago

Right now, I'm thinking precision-recall curves. They're similar to AUROC but work better in effectively unbounded result spaces (which a corpus of ~1,000,000 opinions essentially is).

https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html

They offer a threshold-agnostic and true-positive-set-size-agnostic way of measuring recommender quality.
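A rough sketch of how that could look with scikit-learn's `precision_recall_curve` / `average_precision_score`, along the lines of the example linked above. The per-candidate scores and relevance labels here are synthetic stand-ins for whatever our recommender would actually produce for a single evaluation query:

```python
import numpy as np
from sklearn.metrics import average_precision_score, precision_recall_curve

# Synthetic inputs for one evaluation query, where the recommender has
# scored every candidate opinion in the corpus:
#   y_true  - 1 if the candidate is in the held-out relevant set, else 0
#   y_score - the recommender's relevance score for that candidate
rng = np.random.default_rng(0)
n_candidates = 10_000
y_true = np.zeros(n_candidates, dtype=int)
y_true[rng.choice(n_candidates, size=25, replace=False)] = 1  # 25 relevant cases
y_score = rng.normal(size=n_candidates) + 3.0 * y_true  # relevant cases score higher

# Precision/recall pairs across every score threshold, plus a single
# threshold-free summary (average precision ~ area under the PR curve).
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
ap = average_precision_score(y_true, y_score)
print(f"average precision: {ap:.3f}")

# Optional: plot the curve, as in the linked scikit-learn example.
# from sklearn.metrics import PrecisionRecallDisplay
# PrecisionRecallDisplay(precision=precision, recall=recall).plot()
```

The nice property is that nothing above depends on picking a cutoff like top-5 or top-20, and the average-precision summary stays comparable even when the size of the relevant set varies from query to query.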