huggingface / datasets

🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
https://huggingface.co/docs/datasets
Apache License 2.0

Add support for continuous metrics (RMSE, MAE) #3608

Closed ck37 closed 2 years ago

ck37 commented 2 years ago

Is your feature request related to a problem? Please describe.

I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome, our NLP models perform regression rather than classification, so binary classification metrics are not relevant. The only continuous metrics currently available at https://huggingface.co/metrics are Pearson and Spearman correlation, which don't ensure that the predictions are on the same scale as the outcome.

Describe the solution you'd like

I would like to be able to tag our models on the Hub with the following metrics:

- RMSE (root mean squared error)
- MAE (mean absolute error)

Describe alternatives you've considered

I don't know if there are any alternatives.

Additional context

Our preprint is available here: https://arxiv.org/abs/2009.10277. We are making it available for use in Jigsaw's Toxic Severity Rating Kaggle competition: https://www.kaggle.com/c/jigsaw-toxic-severity-rating/overview. I have uploaded our first model to the Hub at https://huggingface.co/ucberkeley-dlab/hate-measure-roberta-large

Thanks, Chris

ariG23498 commented 2 years ago

Hey @ck37

You can always use a custom metric as explained in this guide from HF.

If this issue needs a contribution (to enhance the metric API), I think this link would be helpful for the MAE metric.
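For illustration, a custom MAE metric script might look like the sketch below, following the `datasets.Metric` subclassing pattern from that guide. The file name `mae.py`, the class name, and the use of scikit-learn are assumptions for this example, not the library's official implementation:

```python
# mae.py -- a minimal sketch of a custom MAE metric script,
# assuming the datasets.Metric API and scikit-learn are available.
import datasets
from sklearn.metrics import mean_absolute_error

_DESCRIPTION = "Mean absolute error (MAE) between predictions and references."


class Mae(datasets.Metric):
    def _info(self):
        # Declare the metric's metadata and the expected input features.
        return datasets.MetricInfo(
            description=_DESCRIPTION,
            citation="",
            inputs_description="Lists of floats: predictions and references.",
            features=datasets.Features(
                {
                    "predictions": datasets.Value("float32"),
                    "references": datasets.Value("float32"),
                }
            ),
        )

    def _compute(self, predictions, references):
        # scikit-learn expects (y_true, y_pred) ordering.
        return {"mae": mean_absolute_error(references, predictions)}
```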

callmekofi commented 2 years ago

You can use a local metric script just by providing its path instead of the usual shortcut name.
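For example, assuming a script like the `mae.py` sketch above is saved locally (the path below is hypothetical):

```python
from datasets import load_metric

# Point load_metric at a local script path instead of a Hub shortcut name;
# "./metrics/mae.py" is a hypothetical location for the sketch above.
metric = load_metric("./metrics/mae.py")
print(metric.compute(predictions=[1.0, 2.0], references=[1.5, 2.5]))
# {'mae': 0.5}
```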

dnaveenr commented 2 years ago

self-assign

I have started working on this issue to enhance the metric API.