oO0oO0oO0o0o00 opened this issue 3 years ago
Sorry for the late reply. You have to use tfa.losses.SigmoidFocalCrossEntropy(reduction=tf.keras.losses.Reduction.AUTO) to reduce the loss to a scalar. I'm not sure why we made it default to NONE. @AakashKumarNain Could you confirm whether or not this is an issue?
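For illustration, a quick sketch of the difference (the toy tensors here are mine, not from this thread):

```python
import tensorflow as tf
import tensorflow_addons as tfa

y_true = tf.constant([[1.0], [0.0]])
y_pred = tf.constant([[0.9], [0.2]])

# Default: reduction=Reduction.NONE -> one loss value per example.
per_example = tfa.losses.SigmoidFocalCrossEntropy()(y_true, y_pred)

# Workaround: reduction=Reduction.AUTO -> averaged down to a scalar.
scalar = tfa.losses.SigmoidFocalCrossEntropy(
    reduction=tf.keras.losses.Reduction.AUTO
)(y_true, y_pred)

print(per_example.shape)  # (2,)
print(scalar.shape)       # () -- a scalar, as Keras expects by default
```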
We have some documentation on the reduction parameter here: https://github.com/tensorflow/models/blob/master/official/vision/keras_cv/losses/focal_loss.py#L37
@WindQAQ Yes, that needs to be changed to AUTO, and we need to make a few other changes as well. But I won't be able to fix it before next week.
I put this in the ecosystem review in the meantime because I want to check how we want to handle these duplicated but not strictly aligned implementations.
Agreed
> Sorry for the late reply. You have to use tfa.losses.SigmoidFocalCrossEntropy(reduction=tf.keras.losses.Reduction.AUTO) to reduce the loss to a scalar. I'm not sure why we made it default to NONE. @AakashKumarNain Could you confirm whether or not this is an issue?
Thanks, it works. QAQ
I was facing the same issue; however, using tfa.losses.SigmoidFocalCrossEntropy(reduction=tf.keras.losses.Reduction.AUTO) worked like a charm.
> https://github.com/tensorflow/models/blob/master/official/vision/keras_cv/losses/focal_loss.py#L37

This link is not working. Can you please share the updated link?
@ravinderkhatri keras-cv is being refactored.
We have a PR at https://github.com/tensorflow/addons/pull/2422
Having the same issue, but setting reduction=tf.keras.losses.Reduction.AUTO fixed it. I'm surprised this isn't the default in tensorflow-addons.
There are official upstream APIs now: https://github.com/keras-team/keras-cv/issues/1117
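For example, core Keras itself now ships a focal loss, tf.keras.losses.BinaryFocalCrossentropy (available since TF 2.9), whose reduction defaults to AUTO. A minimal sketch (the toy tensors are illustrative, not from this thread):

```python
import tensorflow as tf

# Upstream focal loss in core Keras (TF >= 2.9). Unlike the tfa version,
# its reduction defaults to AUTO, so it yields a scalar out of the box.
loss_fn = tf.keras.losses.BinaryFocalCrossentropy(gamma=2.0, from_logits=False)

y_true = tf.constant([[1.0], [0.0]])
y_pred = tf.constant([[0.9], [0.2]])
print(loss_fn(y_true, y_pred))  # scalar loss tensor
```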
System information

- TensorFlow version and how it was installed (source or binary): binary (conda install)
- TensorFlow-Addons version and how it was installed (source or binary): binary (pip install tensorflow-addons==0.11.2); tfa requires newer tf
- Keras: tf.keras, not standalone keras
Describe the bug
I have an L2 kernel regularizer set for some of the (Keras) layers, and tfa.losses.SigmoidFocalCrossEntropy() was used as the loss function. After the model was built and compiled, model.fit was called and the following exception occurred. The full stack trace is too long, so it is appended at the tail.
Code to reproduce the issue
Run the following code:
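(The snippet below is a minimal reconstruction of the kind of code that triggers this; the model and data are illustrative placeholders, not the reporter's original code.)

```python
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa

# Toy model with an L2 kernel regularizer, mirroring the report.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        8,
        activation="relu",
        kernel_regularizer="l2",  # removing this makes the error go away
        input_shape=(4,),
    ),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# tfa's focal loss defaults to reduction=Reduction.NONE,
# i.e. it returns one loss value per example.
model.compile(optimizer="adam", loss=tfa.losses.SigmoidFocalCrossEntropy())

x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 2, size=(32, 1)).astype("float32")

# Keras tries to combine the scalar regularization loss with the
# unreduced per-example loss tensor; the shape mismatch raises the
# exception described above.
model.fit(x, y, epochs=1)
```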
And the above-mentioned exception popped up. By removing kernel_regularizer='l2', the exception was gone and the training progress bar appeared as expected.

Other info / logs
Full stack trace: (You may want to skip it)
A full stack trace compiled with run_eagerly=True may be provided if requested. Thanks~