devinbost opened 4 months ago
Thanks for the report @devinbost, I totally see where you're coming from.
@SkafteNicki let's prioritize this issue. What are your thoughts on providing the additional signature?
Hi @devinbost, thanks for raising this issue. Tagging @lucadiliello for opinions, because he has implemented most of the retrieval metrics. I am more than willing to change the implementation based on best practice in the industry. I have only worked briefly on retrieval problems, so I'm not an expert.
The only problem in my view we have to solve is backwards compatibility. I am going to assume that some of our user base are using the retrieval metrics as they are right now, since this is only being raised as an issue now (the alternative is that none are using them). Thus, I would preferably support both cases. @devinbost, does the newly proposed interface do this, or can we discuss how to make things backwards compatible?
They are definitely used: https://github.com/search?q=RetrievalMRR&type=code
So we need to come up with a backwards compatible design. We could add a new submodule, or a new set of classes in the retrieval submodule. Given the different signatures I wouldn't complicate the existing classes, but that's me.
@SkafteNicki @lucadiliello @devinbost what would be a good name for the submodule that is not `retrieval`? Is `search` a good enough one?
I deeply appreciate all the attention on this!!
@lantiga `search` is a good name. The implementation that I recommended above would definitely apply in search use cases.
@devinbost Are there any metrics from the retrieval domain you are particularly interested in? Just to prioritize what to implement first.
@SkafteNicki The first one would be recall. After that, MRR and then NDCG. Those are the most commonly used metrics in industry and the literature.
🚀 Request
Retrieval metrics should be more closely aligned with typical industry practices. A recommendation is below.
Explanation
The current best practice for calculating retrieval metrics follows this process:
1. For each query, compute the ground-truth results (e.g. via an exact, brute-force search).
2. For each query, compute the results actually returned by the system being evaluated.
3. Compare the two result sets per query and aggregate across queries.
Example (vector search recall)
In the case of top-k recall for vector search, the ground-truth top k for each query comes from an exact nearest-neighbour search, the actual top k comes from the (approximate) index being evaluated, and recall is the fraction of the ground-truth neighbours that appear in the retrieved set.
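A minimal sketch of this recall@k calculation (the function name `recall_at_k` and the tensors here are purely illustrative, not part of any existing API):

```python
import torch

def recall_at_k(ground_truth_ids: torch.Tensor, retrieved_ids: torch.Tensor, k: int) -> float:
    """Recall@k as used in ANN benchmarking: the fraction of the exact top-k
    neighbours that also appear in the top-k results returned by the index."""
    true_top = set(ground_truth_ids[:k].tolist())   # ids from an exact (brute-force) search
    found_top = set(retrieved_ids[:k].tolist())     # ids returned by the ANN index
    return len(true_top & found_top) / k

# 3 of the 4 exact neighbours were retrieved -> recall@4 = 0.75
print(recall_at_k(torch.tensor([7, 2, 9, 4]), torch.tensor([2, 9, 7, 11]), k=4))
```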
Current implementation
The current implementation in torchmetrics expects indexes (corresponding to queries), predictions (probabilities or similarity scores used to rank), and targets (which are supposed to be ground-truth relevance labels).
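For reference, the current usage pattern looks roughly like this (class and argument names as documented in torchmetrics today; exact defaults may differ between versions):

```python
import torch
from torchmetrics.retrieval import RetrievalRecall

# Each prediction needs an `indexes` entry (which query it belongs to) and a
# binary `target` label stating whether that single document is relevant.
indexes = torch.tensor([0, 0, 0, 1, 1, 1])
preds = torch.tensor([0.9, 0.3, 0.5, 0.2, 0.7, 0.6])  # ranking/similarity scores
target = torch.tensor([1, 0, 1, 0, 1, 0])             # 1:1 relevance label per prediction

metric = RetrievalRecall(top_k=2)
print(metric(preds, target, indexes=indexes))
```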
What's misleading:
The problem is that the current implementation assumes a 1:1 mapping between predictions and ground-truth labels, which does not align with industry practice.
How it should work instead
The method signature should look more like this:
- query/index (tensor)
- true_preds (tensor)
- true_targets (tensor)
- actual_preds (tensor)
- actual_targets (tensor)
- threshold (optional float)
- epsilon (optional float)
- similarity/distance (bool, defaults to distance)
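A hypothetical stub of that signature might look like the following (all names here are illustrative, not an existing torchmetrics API):

```python
from typing import Optional

import torch

def retrieval_score(
    query: torch.Tensor,           # query/index id for each row
    true_preds: torch.Tensor,      # ground-truth scores (e.g. exact distances or similarities)
    true_targets: torch.Tensor,    # ids/labels of the ground-truth results
    actual_preds: torch.Tensor,    # scores returned by the system under evaluation
    actual_targets: torch.Tensor,  # ids/labels of the returned results
    k: Optional[int] = None,       # used for the count-based (top-k) variant
    threshold: Optional[float] = None,
    epsilon: float = 0.0,
    is_distance: bool = True,      # True: lower is better (distance); False: similarity
) -> torch.Tensor:
    ...
```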
With this data, the score calculation could be based on either:
1. Count. If the preds are distinct, then the top k could be obtained by:
   a. sorting (true_preds, true_targets) by true_preds and filtering to the top k items,
   b. sorting (actual_preds, actual_targets) by actual_preds and filtering to the top k items,
   c. performing the computation between the two lists for each given query/index.
   (A sketch of this variant follows the list below.)
2. Threshold. If the preds are not distinct, then we would take the top values using a threshold t:
   a. sorting (true_preds, true_targets) by true_preds and filtering to the items where true_preds <= t (if t is a distance) or >= t (if t is a similarity),
   b. sorting (actual_preds, actual_targets) by actual_preds and filtering to the items where actual_preds <= (1 + epsilon) * t (if t is a distance) or >= (1 - epsilon) * t (if t is a similarity), where epsilon is a modifier to soften the filter on the actual side,
   c. performing the computation between the two lists for each given query/index.
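A rough sketch of the count-based (top-k) variant, reusing the hypothetical names above and assuming, for simplicity, that the ground-truth and actual result lists share the same `query` tensor (i.e. all five tensors have the same length):

```python
import torch

def recall_at_k_per_query(
    query: torch.Tensor,
    true_preds: torch.Tensor,
    true_targets: torch.Tensor,
    actual_preds: torch.Tensor,
    actual_targets: torch.Tensor,
    k: int,
    is_distance: bool = True,
) -> torch.Tensor:
    """Count-based recall@k averaged over queries: for each query, the fraction
    of the ground-truth top-k ids that also appear in the system's top-k ids."""
    per_query = []
    for q in torch.unique(query):
        mask = query == q
        # Sort both sides by score (ascending for distances, descending for
        # similarities) and keep the top-k ids of each.
        true_order = torch.argsort(true_preds[mask], descending=not is_distance)
        actual_order = torch.argsort(actual_preds[mask], descending=not is_distance)
        true_top = set(true_targets[mask][true_order][:k].tolist())
        actual_top = set(actual_targets[mask][actual_order][:k].tolist())
        per_query.append(len(true_top & actual_top) / k)
    return torch.tensor(per_query).mean()
```

The threshold-based variant would differ only in the filtering step (b.), selecting by score cutoff rather than by rank.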
An implementation like this would enable torchmetrics to calculate Recall, MAP, AUROC, NDCG, MRR, etc., based on industry-accepted practices.
Additional context
The well-known ANN-Benchmarks paper goes into detail on the recall calculation used in the example above.
(Aumüller, M., Bernhardsson, E., & Faithfull, A. (2020). ANN-Benchmarks: A benchmarking tool for approximate nearest neighbor algorithms. Information Systems, 87, 101374. Available at: https://arxiv.org/pdf/1807.05614.pdf)