Open hachreak opened 5 years ago
Hi, @hachreak, thank you for posting the issue. Are the returned values also bigger than 1?
Hi @ybubnov, thanks for the reply. What do you mean by the returned values? I have only configured:
```python
model.compile(
    optimizer=opt.Adam(lr=1e-4),
    loss=losses,
    metrics=[km.binary_f1_score()]
)
```
It works well until the end of the epoch.. it's very strange. :smile:
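For context, `binary_f1_score` is the harmonic mean of precision and recall, both derived from true-positive, false-positive, and false-negative counts. A pure-NumPy sketch of the quantity the metric tracks (the sample arrays are invented for illustration):

```python
import numpy as np

# Hypothetical batch: binary ground-truth labels and thresholded predictions.
y_true = np.array([1, 0, 1, 1])
y_pred = np.array([1, 1, 1, 0])

tp = np.sum(y_true * y_pred)          # true positives
fp = np.sum((1 - y_true) * y_pred)    # false positives
fn = np.sum(y_true * (1 - y_pred))    # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)  # all three are 2/3 for this batch
```

As long as `y_true` contains only 0s and 1s, all three counters are non-negative, so the scores stay in [0, 1].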
@hachreak, I see. If possible, could you show a runnable sample of the code and the data you feed to the model? That would help the troubleshooting a lot.
The most common reason this happens is an issue with the data being fed to the model.
I built a new CNN and ran a new training. This time, after ~200 images, it's the precision and f1-score that go negative!
```
269/39088 [..............................] conv2d_1_acc: 0.6118 - conv2d_1_precision: -0.0089 - conv2d_1_recall: 0.5345 - conv2d_1_f1_score: -0.0181 - conv2d_1_false_positive: -1186967987.0000
```
I was checking the code of `precision` and `recall`. The only difference between them is the use of `false_positive` instead of `false_negative`. From the code of `false_positive`, the only way it can go negative seems to be when `y_true` is greater than 1. But I checked my code and that doesn't look like the case, because I'm forcing the labels to be 0 or 1.
Am I doing something wrong?
Any suggestion is really appreciated. Thanks :smile:
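One quick sanity check is to inspect the unique label values that actually reach the model, since a single stray value outside {0, 1} is enough to corrupt the counters. A minimal NumPy snippet (the array contents are hypothetical):

```python
import numpy as np

# Hypothetical label batch; in practice, take this from your data generator.
y_true = np.array([[0, 1], [1, 0], [2, 1]])  # note the stray 2

unique = np.unique(y_true)
if not set(unique).issubset({0, 1}):
    print("non-binary labels found:", unique)
```

Running this check on a few batches right before `model.fit` can confirm whether the "0 or 1" assumption really holds end to end.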
There are two possible ways to get a negative false-positive counter:

1. Feeding training data where the output (the actual Y value) is not binary, so this code strikes:

```python
class false_positive(layer):
    # ...
    def __call__(self, y_true, y_pred):
        y_true, y_pred = self.cast(y_true, y_pred)
        neg_y_true = 1 - y_true  # <- if y_true is out of the [0, 1] range, this value can be negative.
        fp = K.sum(neg_y_true * y_pred)
        # ...
```

2. There is some issue with data conversion that causes `self.cast(y_true, y_pred)` to return an incorrect result.
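The first of these two failure modes can be demonstrated with plain NumPy, no Keras required (the arrays are invented for illustration):

```python
import numpy as np

# Hypothetical batch where one label is 2 instead of 0/1.
y_true = np.array([2.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.1, 0.2])

neg_y_true = 1 - y_true            # [-1.  1.  0.] -- negative where y_true > 1
fp = np.sum(neg_y_true * y_pred)   # -0.9 + 0.1 + 0.0 = -0.8
print(fp)

# With strictly binary labels, the counter stays non-negative:
y_true_ok = np.array([1.0, 0.0, 1.0])
print(np.sum((1 - y_true_ok) * y_pred))
```

Since the counter accumulates over the whole epoch, even rare out-of-range labels can eventually drive it far negative, which matches the huge `-1186967987` value in the progress bar above.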
Hi everybody, I was using the library in my training and everything looked good. This is an example:
Until it arrives at the end of the epoch, where it shows some weird behavior:
The precision / recall / f1-score for the validation look good, but for the training they have a value bigger than 1. They should always remain less than 1, shouldn't they? Thanks