dreamquark-ai / tabnet

PyTorch implementation of the TabNet paper: https://arxiv.org/pdf/1908.07442.pdf
https://dreamquark-ai.github.io/tabnet/
MIT License

Additions to default metrics? #489

Open rawanmahdi opened 1 year ago

rawanmahdi commented 1 year ago

Feature request

It would be nice to be able to access precision, recall, and F1 scores as default metrics, or to support a classification report output.

What is the expected behavior? Compute precision, recall, and F1 scores from the model's predictions on the dataset, either during or after training.
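
For the after-training case, here is a minimal sketch using sklearn's `classification_report` on a fitted `TabNetClassifier`; `X_train`, `y_train`, `X_valid`, `y_valid`, `X_test`, and `y_test` are placeholder arrays:

```python
from pytorch_tabnet.tab_model import TabNetClassifier
from sklearn.metrics import classification_report

clf = TabNetClassifier()
clf.fit(X_train, y_train, eval_set=[(X_valid, y_valid)])

# Per-class precision, recall, and F1 on held-out data after training
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
```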

What is the motivation or use case for adding/changing the behavior? When working with imbalanced datasets, metrics such as accuracy can conceal the true behaviour of the model; F1 scores tend to be more informative.

How should this be implemented in your opinion? Similar to sklearn's `classification_report`, computed on test data after training, or as a tracked metric during training.
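
For the during-training case, pytorch-tabnet's existing custom-metric hook (subclassing `pytorch_tabnet.metrics.Metric` and passing it via `eval_metric`) should already allow per-epoch tracking without library changes. A minimal sketch, with macro averaging as an assumed default and `X_train`/`y_train`/`X_valid`/`y_valid` again as placeholders:

```python
import numpy as np
from sklearn.metrics import f1_score
from pytorch_tabnet.metrics import Metric
from pytorch_tabnet.tab_model import TabNetClassifier

class F1(Metric):
    def __init__(self):
        self._name = "f1"       # name that appears in the epoch logs
        self._maximize = True   # early stopping should maximize this metric

    def __call__(self, y_true, y_score):
        # y_score holds class probabilities; argmax gives the predicted class
        y_pred = np.argmax(y_score, axis=1)
        return f1_score(y_true, y_pred, average="macro")

clf = TabNetClassifier()
clf.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    eval_metric=[F1],  # tracked on the eval set at every epoch
)
```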

Are you willing to work on this yourself? yes