Closed franchuterivera closed 3 years ago
Add a checker that instantiates each metric object and verifies it works. For example:
https://github.com/LMZimmer/Auto-PyTorch_refactor/blob/acc7eebe9641a995933c9243c1308b7e23b7ab08/autoPyTorch/pipeline/components/training/metrics/Accuracy.py#L22
This assumes that PyTorch Lightning has a pytorchlighting.classification.Accuracy object, but the correct path is pytorch_lightning.metrics.classification.Accuracy.
Isn't this already being done in test_metrics.py?
Add a method for testing every metric, not just the supported ones.
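Such a checker could be sketched as below. This is a minimal, self-contained illustration, not autoPyTorch's actual registry: the `METRIC_REGISTRY` dict and the two metric classes are stand-ins for the real metric components, but the pattern (instantiate every registered metric and call it on dummy data) is what would catch a broken import path like the one above at test time.

```python
import numpy as np

# Stand-in metric classes; in autoPyTorch these would be the real
# metric components (e.g. the Accuracy wrapper linked above).
class Accuracy:
    def __call__(self, y_true, y_pred):
        return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

class BalancedAccuracy:
    def __call__(self, y_true, y_pred):
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        # Per-class recall, averaged over classes.
        recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
        return float(np.mean(recalls))

# Hypothetical registry of *all* metrics, not only the supported ones.
METRIC_REGISTRY = {
    "accuracy": Accuracy,
    "balanced_accuracy": BalancedAccuracy,
}

def check_all_metrics(registry):
    """Instantiate each registered metric and verify it returns a finite score."""
    y_true = [0, 1, 1, 0]
    y_pred = [0, 1, 0, 0]
    results = {}
    for name, metric_cls in registry.items():
        metric = metric_cls()            # fails here if the class/import is broken
        score = metric(y_true, y_pred)   # fails here if the call is broken
        assert np.isfinite(score), f"{name} returned a non-finite score"
        results[name] = score
    return results

print(check_all_metrics(METRIC_REGISTRY))
```

Running this over the full registry in a unit test would have surfaced the bad `pytorchlighting.classification.Accuracy` path immediately, since instantiation (or the first call) would raise.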