Lightning-AI / torchmetrics

Torchmetrics - Machine learning metrics for distributed, scalable PyTorch applications.
https://lightning.ai/docs/torchmetrics/
Apache License 2.0

Add Area Under PR-curve metric #1176

Closed — rabinadk1 closed this issue 2 years ago

rabinadk1 commented 2 years ago

🚀 Feature

There doesn't seem to be an easy way to calculate the AUPR using torchmetrics. If anybody has the recommended way, please comment below.

Motivation

The area under the precision-recall curve (AUC-PR) is a model performance metric for binary responses that is appropriate for rare events and not dependent on model specificity.

Pitch

I want a metric similar to AUROC, but using the PR curve instead of the ROC curve.

Alternatives

I have used the functional precision-recall curve and then the functional AUC over the resulting precision and recall values. Still, for multi-label classification I find it hard to get the AUC for each class. So I fell back to using the functional versions of both the PR curve and AUC, and now I get a lot of NaNs.
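(As an aside on where those NaNs can come from: in multi-label data, any class with no positive labels in the batch has an undefined recall, since recall = TP / (TP + FN) becomes 0/0. A minimal pure-Python sketch — not the torchmetrics implementation, and `recall_points` is a hypothetical helper name — illustrating this:

```python
def recall_points(labels_sorted_by_score):
    """Recall at each threshold, for one class's labels sorted by
    descending score. A class with zero positives yields NaN at every
    point, because recall = tp / total_positives is 0/0."""
    total_pos = sum(labels_sorted_by_score)
    tp, points = 0, []
    for y in labels_sorted_by_score:
        tp += y
        points.append(tp / total_pos if total_pos else float("nan"))
    return points
```

Averaging per-class AUCs without masking such classes propagates the NaN into the aggregate, which matches the behaviour described above.)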

github-actions[bot] commented 2 years ago

Hi! Thanks for your contribution — great first issue!

saikatkumardey commented 2 years ago

Average precision is the same as AUPRC. Source
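(To make the equivalence concrete: average precision is the step-wise sum AP = Σₙ (Rₙ − Rₙ₋₁) · Pₙ, i.e. a right-sided rectangle approximation of the area under the precision-recall curve. A minimal pure-Python sketch — not the torchmetrics or scikit-learn implementation, ties between scores are not merged — showing the computation for a single binary class:

```python
def pr_points(scores, labels):
    """(recall, precision) at each threshold, sweeping scores in
    descending order for one binary class."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = fp = 0
    points = []
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        points.append((tp / total_pos, tp / (tp + fp)))
    return points

def average_precision(scores, labels):
    """AP = sum of (R_n - R_{n-1}) * P_n over thresholds: the
    step-wise area under the PR curve."""
    ap, prev_recall = 0.0, 0.0
    for recall, precision in pr_points(scores, labels):
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap
```

This is why a separate AUPRC metric would duplicate what average precision already computes.)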

SkafteNicki commented 2 years ago

Based on @saikatkumardey's comment, it does not seem worthwhile to implement AUPRC as its own metric when the average precision metric is equivalent. However, I have added a note about this equivalence to the documentation for the upcoming classification refactor: https://github.com/Lightning-AI/metrics/pull/1195/commits/6b60bf318beb83b46925ee14497c2e3982749feb. Closing issue.