Lightning-AI / torchmetrics

Machine learning metrics for distributed, scalable PyTorch applications.
https://lightning.ai/docs/torchmetrics/
Apache License 2.0

Retrieval metrics are misleading - here's how it should be done instead #2611

Open devinbost opened 4 months ago

devinbost commented 4 months ago

🚀 Request

Retrieval metrics should be more closely aligned with typical industry practice. A recommendation is below.

Explanation

The current best practice for calculating retrieval metrics follows this process:

  1. Calculate ground truth search labels. In the case of vector search, this would be via exact KNN search.
  2. Calculate predicted results. In the case of vector search, this would be retrieving ANN search results.
  3. Calculate the score. Whether a predicted result counts as relevant is determined by whether it appears in the ground truth list. Some implementations also consider the position.

Example (vector search recall)

In the case of top k recall for vector search:

  1. Obtain the top k exact KNN results for the given query.
  2. Obtain the top k ANN results for the given query.
  3. Calculate the percentage of ANN results that appear in the KNN result list.
  4. Repeat for every query in the test set.
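
As a minimal illustration (not torchmetrics code), assuming each query's exact KNN and ANN results are available as 1-D tensors of item ids (the names `exact_ids`, `ann_ids`, `recall_at_k`, and `mean_recall_at_k` are hypothetical), the per-query recall and its average over the test set could be computed like this:

```python
import torch

def recall_at_k(exact_ids: torch.Tensor, ann_ids: torch.Tensor, k: int) -> float:
    """Fraction of the top-k ANN results that also appear in the top-k exact KNN results."""
    ground_truth = set(exact_ids[:k].tolist())
    hits = sum(1 for item in ann_ids[:k].tolist() if item in ground_truth)
    return hits / k

def mean_recall_at_k(exact_results, ann_results, k: int) -> float:
    """Average recall@k over every query in the test set."""
    scores = [recall_at_k(e, a, k) for e, a in zip(exact_results, ann_results)]
    return sum(scores) / len(scores)
```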

Current implementation

The current implementation in torchmetrics expects indexes (corresponding to queries), predictions (probabilities used for ranking, which could be similarity scores), and targets (which are supposed to be ground truth labels).
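
For reference, usage of the current interface looks roughly like this (values are illustrative, based on the documented `RetrievalRecall` module in `torchmetrics.retrieval`):

```python
from torch import tensor
from torchmetrics.retrieval import RetrievalRecall

# indexes groups rows by query; preds are ranking scores; target marks each
# candidate as relevant or not -- the 1:1 pairing discussed below
indexes = tensor([0, 0, 0, 1, 1, 1, 1])
preds = tensor([0.2, 0.3, 0.5, 0.1, 0.3, 0.5, 0.2])
target = tensor([False, False, True, False, True, False, True])

recall = RetrievalRecall(top_k=2)
score = recall(preds, target, indexes=indexes)
```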

What's misleading:

The problem is that the current implementation assumes a 1:1 mapping between predictions and ground truth labels, which does not align with industry practice.

How it should work instead

The method signature should look more like this:

  - query/index (tensor)
  - true_preds (tensor)
  - true_targets (tensor)
  - actual_preds (tensor)
  - actual_targets (tensor)
  - threshold (optional float)
  - epsilon (optional float)
  - similarity/distance (bool, defaults to distance)
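
A hypothetical functional signature along these lines might look like the stub below. The function name is illustrative, and `top_k` is an addition not listed above (it would be needed for the count-based mode described next); none of this is existing torchmetrics API.

```python
from typing import Optional
import torch

def retrieval_recall_from_results(
    indexes: torch.Tensor,         # query id for each row
    true_preds: torch.Tensor,      # ground truth scores (e.g. exact KNN distances)
    true_targets: torch.Tensor,    # ground truth item ids
    actual_preds: torch.Tensor,    # predicted scores (e.g. ANN distances)
    actual_targets: torch.Tensor,  # predicted item ids
    top_k: Optional[int] = None,
    threshold: Optional[float] = None,
    epsilon: Optional[float] = None,
    is_distance: bool = True,      # scores are distances (lower is better) rather than similarities
) -> torch.Tensor:
    ...
```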

With this data, the score calculation could be based on either:

  1. Count. If the preds are distinct, then top k could be obtained by:
     a. sorting (true_preds, true_targets) by true_preds and filtering to the top k items,
     b. sorting (actual_preds, actual_targets) by actual_preds and filtering to the top k items,
     c. performing the computation between the lists for each given query/index.

  2. Threshold. If preds are not distinct, then we would take the top values determined by a threshold t:
     a. sorting (true_preds, true_targets) by true_preds and filtering to the top items where true_preds <= t (if t is a distance) or >= t (if t is a similarity),
     b. sorting (actual_preds, actual_targets) by actual_preds and filtering to the top items where actual_preds <= (1 + epsilon) * t (if t is a distance) or >= (1 - epsilon) * t (if t is a similarity), where epsilon is a modifier that softens the filter on the actual side,
     c. performing the computation between the lists for each given query/index.

An implementation like this would enable torchmetrics to calculate Recall, MAP, AUROC, NDCG, MRR, etc., based on industry-accepted practices.
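
A minimal sketch of the two selection modes, assuming per-query 1-D tensors of scores and item ids; the helper names are hypothetical and the parameter names simply mirror the proposed signature:

```python
from typing import Optional
import torch

def _select_ids(preds: torch.Tensor, targets: torch.Tensor,
                top_k: Optional[int] = None, threshold: Optional[float] = None,
                epsilon: float = 0.0, is_distance: bool = True) -> set:
    """Item ids kept for one query: the top k by score (count mode),
    or everything passing an epsilon-softened threshold (threshold mode)."""
    order = torch.argsort(preds, descending=not is_distance)  # best items first
    preds, targets = preds[order], targets[order]
    if top_k is not None:                                     # count-based selection
        kept = targets[:top_k]
    else:                                                     # threshold-based selection
        t = threshold * (1 + epsilon) if is_distance else threshold * (1 - epsilon)
        kept = targets[preds <= t] if is_distance else targets[preds >= t]
    return set(kept.tolist())

def recall_for_query(true_preds, true_targets, actual_preds, actual_targets,
                     top_k=None, threshold=None, epsilon=0.0, is_distance=True) -> float:
    # epsilon only softens the filter on the actual (retrieved) side
    truth = _select_ids(true_preds, true_targets, top_k, threshold, 0.0, is_distance)
    retrieved = _select_ids(actual_preds, actual_targets, top_k, threshold, epsilon, is_distance)
    return len(truth & retrieved) / len(truth) if truth else 0.0
```

Per-query scores would then be averaged across the query/index values, exactly as in the recall example above.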

Additional context

The well-known ANN-Benchmarks paper goes into detail on the recall calculation used in the example above.


(Aumüller, M., Bernhardsson, E., & Faithfull, A. (2020). ANN-Benchmarks: A benchmarking tool for approximate nearest neighbor algorithms. Information Systems, 87, 101374. Available at: https://arxiv.org/pdf/1807.05614.pdf)

lantiga commented 4 months ago

Thanks for the report @devinbost, I totally see where you're coming from.

@SkafteNicki let's prioritize this issue. What are your thoughts on providing the additional signature?

SkafteNicki commented 4 months ago

Hi @devinbost, thanks for raising this issue. Tagging @lucadiliello for opinions, since he has implemented most of the retrieval metrics. I am more than willing to change the implementation based on best practice in the industry. I have only worked briefly on retrieval problems, so I am not an expert.

The only problem we have to solve, in my view, is backwards compatibility. I am going to assume that some of our user base is using the retrieval metrics as they are right now, since this has only been raised as an issue now (the alternative is that no one is using them). Thus, I would prefer to support both cases. @devinbost does the newly proposed interface do this, or can we discuss how to make things backwards compatible?

lantiga commented 4 months ago

They are definitely used: https://github.com/search?q=RetrievalMRR&type=code

So we need to come up with a backwards compatible design. We could add a new submodule, or a new set of classes in the retrieval submodule. Given the different signatures I wouldn't complicate the existing classes, but that's me.

@SkafteNicki @lucadiliello @devinbost what would be a good name for the new submodule, other than retrieval? Is search a good enough one?

devinbost commented 4 months ago

I deeply appreciate all the attention on this!!

@lantiga search is a good name. The implementation that I recommended above would definitely apply in search use cases.

SkafteNicki commented 4 months ago

@devinbost Are there any metrics from the retrieval domain you are particularly interested in? Just to prioritize what to implement first.

devinbost commented 4 months ago

@SkafteNicki The first one would be recall. After that, MRR and then NDCG. Those are the most commonly used metrics in industry and the literature.