adamtupper closed this issue 11 months ago
Hi, these would be some exciting features to add to the package. If you could open a PR for this, that would be very helpful!
Great! I'll work on this when I get a chance.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.
🚀 Feature
Generalize the evaluation/checkpointing behaviour to support checkpointing and best-model tracking based on other evaluation metrics.
Motivation
Currently the best model is saved based on the validation accuracy. However, in some cases this is not the metric we are most interested in (e.g., in imbalanced settings we might prefer to maximize the balanced accuracy instead).
Pitch
Add a configuration parameter (e.g., eval_metric) that defines the metric used to track the best model. A set of metrics could be provided to choose from (starting with those that are already tracked, e.g., accuracy, balanced accuracy, precision, etc.). Ideally, this would be implemented in such a way that new metrics can easily be added over time, without requiring many small changes scattered throughout the codebase.
Alternatives
None
Additional context
None
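To make the Pitch concrete, here is a minimal sketch of how a metric registry plus an eval_metric parameter might look. All names here (EVAL_METRICS, BestModelTracker, eval_metric) are illustrative, not part of the package's actual API; the point is that adding a new metric touches only the registry, not the checkpointing logic.

```python
from typing import Callable, Dict, List, Tuple

def accuracy(y_true: List[int], y_pred: List[int]) -> float:
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true: List[int], y_pred: List[int]) -> float:
    # Mean of per-class recalls, robust to class imbalance.
    classes = set(y_true)
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# Registering a new metric is a one-line addition here; the tracker below
# never needs to change. Each entry maps a name to (function, mode), where
# mode says whether larger or smaller values are better.
EVAL_METRICS: Dict[str, Tuple[Callable[[List[int], List[int]], float], str]] = {
    "accuracy": (accuracy, "max"),
    "balanced_accuracy": (balanced_accuracy, "max"),
}

class BestModelTracker:
    """Tracks the best score for the configured eval_metric."""

    def __init__(self, eval_metric: str = "accuracy"):
        self.metric_fn, self.mode = EVAL_METRICS[eval_metric]
        self.best_score = float("-inf") if self.mode == "max" else float("inf")

    def update(self, y_true: List[int], y_pred: List[int]) -> bool:
        """Return True if this evaluation is the new best (i.e., checkpoint now)."""
        score = self.metric_fn(y_true, y_pred)
        if self.mode == "max":
            improved = score > self.best_score
        else:
            improved = score < self.best_score
        if improved:
            self.best_score = score
        return improved
```

For example, with eval_metric="balanced_accuracy" and the imbalanced labels [0, 0, 0, 1], the predictions [0, 0, 1, 1] beat the all-zeros predictions even though both have identical plain accuracy, which is exactly the behaviour the Motivation section asks for.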