tensorflow/decision-forests

A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models in Keras.
Apache License 2.0

How to check the F1 score for multi-class classification task? #22

Closed · greenairy closed this issue 3 years ago

greenairy commented 3 years ago

I have successfully run this Decision Forests algorithm. However, my data is severely imbalanced across categories, in which case accuracy is not a fair measure of model performance. Are there options to use F1, precision, and recall as the evaluation metrics?

achoum commented 3 years ago

Hi,

You can use any Keras metric; see the list of Keras metrics.

The following example computes accuracy and AUC, as well as the F1 score, precision, and recall at threshold 0.5, for a binary classification TF-DF model.

# Install TF Addons to compute the F1 score.
!pip install tensorflow_addons -U --quiet
import tensorflow as tf
import tensorflow_addons as tfa

model.compile(metrics=[
    "accuracy",
    tf.keras.metrics.AUC(num_thresholds=10000),
    tf.keras.metrics.Precision(thresholds=0.5),
    tf.keras.metrics.Recall(thresholds=0.5),
    tfa.metrics.F1Score(num_classes=1, threshold=0.5),
])
model.evaluate(test_ds, return_dict=True)

# Note: `tfa.metrics.F1Score` requires `num_classes=1` for binary
# classification when the predictions contain only the positive-class
# probability.
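
For a multi-class model, `tfa.metrics.F1Score` can be used as well, but it expects one-hot labels of shape (batch, num_classes), while TF-DF datasets usually carry integer class labels. A minimal sketch of one way to bridge the two (the `SparseF1Score` wrapper and the class count are illustrative assumptions, not part of the library):

import tensorflow as tf
import tensorflow_addons as tfa

NUM_CLASSES = 3  # assumption: set to the number of classes in your dataset

class SparseF1Score(tfa.metrics.F1Score):
    """F1Score that accepts integer class labels by one-hot encoding them."""
    def update_state(self, y_true, y_pred, sample_weight=None):
        # Convert integer labels, e.g. shape (batch, 1), to one-hot
        # (batch, NUM_CLASSES) to match the prediction shape.
        y_true = tf.one_hot(tf.cast(tf.reshape(y_true, [-1]), tf.int32),
                            depth=NUM_CLASSES)
        return super().update_state(y_true, y_pred, sample_weight)

model.compile(metrics=[
    "accuracy",
    SparseF1Score(num_classes=NUM_CLASSES, average="macro"),
])
model.evaluate(test_ds, return_dict=True)

With average="macro", each class contributes equally to the score, which is usually what you want under class imbalance; average=None instead returns one F1 value per class.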
greenairy commented 3 years ago

Hi,

I believe these metrics are only suitable for binary classification. To evaluate a multi-class classification, it seems the only way is to define custom metric functions and compile them with the model. I will figure it out anyway.
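
For reference, one way to get per-class precision, recall, and F1 without writing custom Keras metrics is to compute them offline with scikit-learn (a minimal sketch, assuming `test_ds` yields `(features, integer_label)` batches and the model outputs per-class probabilities):

import numpy as np
from sklearn.metrics import classification_report

# Collect the integer labels and the predicted classes over the test set.
y_true = np.concatenate([labels.numpy() for _, labels in test_ds]).ravel()
y_pred = model.predict(test_ds).argmax(axis=1)

# Per-class precision, recall, and F1, plus macro/weighted averages.
print(classification_report(y_true, y_pred))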

Thanks a lot for your help.

mainguyenanhvu commented 2 years ago

@nuanv Hi, have you written code for multi-class classification? Could you share it with me?

mainguyenanhvu commented 2 years ago

@achoum @nuanv I ran into an issue with `keras.metrics.AUC(name='auc', multi_label=True, num_labels=4)`.

Training completed, but it then returned this error:

ValueError: Number of labels is not consistent..  Specified by tensor ExpandDims:0 dimension 1.  Tensor assert_shapes/ReadVariableOp:0 dimension 1 must have size 1.  Received size 4, shape (200, 4)

Please help me solve it.
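
One likely cause: with multi_label=True and num_labels=4, AUC expects one-hot labels matching the (batch, 4) prediction shape, while TF-DF datasets typically feed integer class indices (one value per example), so the metric's shape check fails. A sketch of one possible workaround, one-hot encoding the labels inside a small wrapper (the `OneHotAUC` name and the class count are assumptions for illustration):

import tensorflow as tf

NUM_CLASSES = 4  # assumption: matches num_labels above

class OneHotAUC(tf.keras.metrics.AUC):
    """AUC that one-hot encodes sparse integer labels before updating."""
    def update_state(self, y_true, y_pred, sample_weight=None):
        # Labels arrive as integer class indices, e.g. shape (batch, 1);
        # convert to (batch, NUM_CLASSES) to match the predictions.
        y_true = tf.one_hot(tf.cast(tf.reshape(y_true, [-1]), tf.int32),
                            depth=NUM_CLASSES)
        return super().update_state(y_true, y_pred, sample_weight)

model.compile(metrics=[OneHotAUC(name="auc", multi_label=True,
                                 num_labels=NUM_CLASSES)])
model.evaluate(test_ds, return_dict=True)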