Keras used to provide those metrics, but they have been removed because they were only approximated on batches. For more information, see this issue.
Another package, `keras-metrics`, provides ready-to-use metrics for Keras, but it seems that there is a problem with models that get saved using `model.save()`: metrics defined by this package aren't correctly serialized in the `.h5` file. See this issue.
As fchollet
suggests, the best option might be to use a custom workflow (hence our second scenario).
Moving on to it now! 🏃
This has been done in #23.
Metrics from scikit-learn have been used for the evaluation, mainly `accuracy_score`, `precision_score`, `recall_score`, `f1_score`, and `confusion_matrix`. More may be added in the future.
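As a rough illustration of that workflow, here is a minimal, self-contained sketch (the labels and predictions are random placeholders standing in for the project's real test labels and thresholded `model.predict()` output):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

# Placeholder binary labels and predictions; in the real workflow, y_pred
# comes from thresholding the probabilities returned by model.predict().
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = (rng.random(200) > 0.5).astype(int)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1 score :", f1_score(y_true, y_pred))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
```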
Accuracy is not sufficient to evaluate models. Different metrics can be used, mainly precision, recall, the F1 score, and the confusion matrix.
This issue explores the evaluation process with metrics. There are two scenarios:

- `model.compile` and `model.evaluate` do the job with custom metrics, which then have to be passed back through the `custom_objects` argument when using `keras.models.load_model` (sketched below);
- the metrics are computed on the output of `model.predict`: `scikit-learn` proposes a bunch of metrics.
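For the first scenario, a minimal, hypothetical sketch could look as follows (the toy model, the made-up `recall_metric`, and the file name `model.h5` are illustrative only, and such a hand-rolled metric is still computed batch-wise, which is exactly the caveat mentioned above):

```python
from keras import backend as K
from keras.layers import Dense
from keras.models import Sequential, load_model

def recall_metric(y_true, y_pred):
    """Batch-wise recall: true positives / (actual positives + epsilon)."""
    true_positives = K.sum(K.round(y_true * y_pred))
    actual_positives = K.sum(K.round(y_true))
    return true_positives / (actual_positives + K.epsilon())

# Toy binary classifier standing in for the project's real model.
model = Sequential([
    Dense(8, activation="relu", input_shape=(4,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", recall_metric])

model.save("model.h5")
# Without custom_objects, load_model cannot deserialize the custom metric.
reloaded = load_model("model.h5",
                      custom_objects={"recall_metric": recall_metric})
```

With this in place, `model.evaluate` reports the custom metric alongside the loss, and reloading the `.h5` file works as long as the same function is supplied through `custom_objects`.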