Open LostInDarkMath opened 1 year ago
@LostInDarkMath,
Apologies for the delay. In your code you are using loss=CategoricalCrossentropy(), for which Accuracy is not the right metric. You can use keras.metrics.CategoricalAccuracy instead to get the correct result. I executed the code with keras.metrics.CategoricalAccuracy and it produced the expected output. Kindly find the gist of it here.
https://www.tensorflow.org/api_docs/python/tf/keras/metrics/CategoricalAccuracy Thank you!
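For reference, a minimal sketch of that pairing (the model, shapes, and data below are made-up assumptions for illustration, not the original reproduction code):

```python
import numpy as np
import tensorflow as tf

# Hypothetical 4-feature, 3-class setup: CategoricalCrossentropy loss
# paired with the matching CategoricalAccuracy metric.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="softmax", input_shape=(4,)),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=[tf.keras.metrics.CategoricalAccuracy()],
)

x = np.random.rand(32, 4).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 3, size=32), num_classes=3)
model.fit(x, y, epochs=1, verbose=0)
```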
Thank you for the clarification. Does that mean the string 'accuracy' is resolved differently depending on the loss function? Or does 'accuracy' always resolve to keras.metrics.CategoricalAccuracy? And where is this behavior documented? Is there something like a mapping table that visualizes this resolution? That would be very nice :)
This issue is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.
@LostInDarkMath Here the following APIs are technically the same:
keras.metrics.Accuracy
keras.metrics.BinaryAccuracy
From the docs:
keras.metrics.Accuracy(name="accuracy", ...) keras.metrics.BinaryAccuracy(name="binary_accuracy", ...) Calculates how often predictions equal labels. This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as binary accuracy: an idempotent operation that simply divides total by count.
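To make that concrete, here is a small sketch (the values are made up) of how the two metrics consume predictions: Accuracy compares y_pred to y_true for exact equality, while BinaryAccuracy first thresholds probabilistic predictions at 0.5:

```python
import tensorflow as tf

# Accuracy: counts exact matches between y_true and y_pred.
acc = tf.keras.metrics.Accuracy()
acc.update_state(y_true=[1, 2, 3, 4], y_pred=[0, 2, 3, 4])
print(acc.result().numpy())  # 0.75 -- 3 of 4 predictions match exactly

# BinaryAccuracy: thresholds probabilities at 0.5 before comparing.
bin_acc = tf.keras.metrics.BinaryAccuracy()
bin_acc.update_state(y_true=[[1], [1], [0], [0]],
                     y_pred=[[0.98], [0.60], [0.30], [0.51]])
print(bin_acc.result().numpy())  # 0.75 -- thresholded preds are [1, 1, 0, 1]
```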
And when we use a string identifier, for example ['accuracy'], it is later converted to the appropriate metric based on the labels and logits (source):
When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings 'crossentropy' and 'ce' as well.
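As a sketch of that resolution (the model and data below are made-up assumptions), you can inspect which metric class the string was converted to after a training step:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="softmax", input_shape=(4,)),
])
# The plain string is resolved at fit time from the target/output shapes.
model.compile(loss="categorical_crossentropy", metrics=["accuracy"])

x = np.random.rand(8, 4).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 3, size=8), num_classes=3)
model.fit(x, y, epochs=1, verbose=0)

# With one-hot targets and a softmax output this should print something
# like ['Mean', 'CategoricalAccuracy'] (the Mean entry tracks the loss).
print([m.__class__.__name__ for m in model.metrics])
```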
@tilakrayal Apart from the above clarification, there may be a potential bug. Please check this question on Stack Overflow, which is very similar to this issue.
From the model.compile documentation:
List of metrics to be evaluated by the model during training and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the shapes of the targets and of the model output. We do a similar conversion for the strings 'crossentropy' and 'ce' as well. The metrics passed here are evaluated without sample weighting; if you would like sample weighting to apply, you can specify your metrics via the weighted_metrics argument instead.
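For the multi-output dictionary form mentioned in that passage, a sketch might look like this (the output names, layer sizes, and losses are assumptions for illustration):

```python
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
output_a = tf.keras.layers.Dense(1, activation="sigmoid", name="output_a")(inputs)
output_b = tf.keras.layers.Dense(1, name="output_b")(inputs)
model = tf.keras.Model(inputs, [output_a, output_b])

model.compile(
    loss={"output_a": "binary_crossentropy", "output_b": "mse"},
    # Different metrics per named output, as the documentation describes.
    metrics={"output_a": "accuracy", "output_b": ["mse"]},
)

x = np.random.rand(16, 4).astype("float32")
y_a = np.random.randint(0, 2, size=(16, 1)).astype("float32")
y_b = np.random.rand(16, 1).astype("float32")
model.fit(x, {"output_a": y_a, "output_b": y_b}, epochs=1, verbose=0)
```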
When you use metrics.Accuracy, it resolves to BinaryAccuracy, as stated in the docs for Accuracy. Whereas when you use the string accuracy, it resolves to BinaryAccuracy or CategoricalAccuracy based on the target shape. This was also explained by @innat above. This may be confusing, so we recommend explicitly using BinaryAccuracy or CategoricalAccuracy if you don't want to use the string accuracy for the metrics.
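A sketch of the resulting discrepancy (the data and model are made up): with one-hot targets, metrics=[keras.metrics.Accuracy()] compares raw softmax probabilities to 0/1 labels for exact equality and so reports an accuracy near zero, while metrics=['accuracy'] resolves to CategoricalAccuracy and reports a meaningful value:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(32, 4).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 3, size=32), num_classes=3)

def make_model(metric):
    # Hypothetical helper: the same model compiled with a given metric.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(3, activation="softmax", input_shape=(4,)),
    ])
    model.compile(loss="categorical_crossentropy", metrics=[metric])
    return model

# Accuracy() demands exact equality between softmax probabilities and
# the one-hot labels, which essentially never holds -> accuracy ~ 0.0.
hist_class = make_model(tf.keras.metrics.Accuracy()).fit(x, y, epochs=1, verbose=0)

# The string resolves to CategoricalAccuracy (argmax comparison).
hist_string = make_model("accuracy").fit(x, y, epochs=1, verbose=0)

print(hist_class.history["accuracy"], hist_string.history["accuracy"])
```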
The same behavior exists in Keras 3 as well. Should we deprecate the use of metrics.Accuracy or make it mimic the string metric accuracy?
metrics.Accuracy can be removed. It causes a lot of confusion, especially among beginners.
System information.
TensorFlow installed via: pip install tensorflow
TensorFlow version: 2.13.0
Python version: 3.11.5
Describe the problem. If I write Accuracy() in the metrics list, it does not work. But the string accuracy does work. According to the docs, both should work. See example code below.
Describe the current behavior.
Describe the expected behavior.
Contributing.
Standalone code to reproduce the issue.
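The original snippet is not included above; the following is a hypothetical reconstruction of the kind of code that triggers the report (the model, shapes, and data are assumptions):

```python
import numpy as np
import tensorflow as tf

# Hypothetical reconstruction -- not the reporter's original code.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="softmax", input_shape=(4,)),
])
model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=[tf.keras.metrics.Accuracy()],  # reported as not working
    # metrics=["accuracy"],                 # reported as working
)
x = np.random.rand(32, 4).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 3, size=32), num_classes=3)
print(model.evaluate(x, y, verbose=0))  # accuracy stays near 0.0
```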
Source code / logs. Nothing.