Open · NguyenKhacPhuc opened this issue 3 years ago
@NguyenKhacPhuc I am getting the same error and I'm not sure what I'm supposed to do. I trained the model before on one dataset, and now that I have new data I suddenly get this error. I tried converting tensor x in rpn_loss_regr_fixed_num to float32, but the error persists.
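For reference, that cast in rpn_loss_regr_fixed_num would look roughly like the sketch below. It follows the usual keras-frcnn losses.py layout; the lambda_rpn_regr / epsilon constants and the 4 * num_anchors split of y_true are assumptions that may differ from this notebook, and, as noted above, the cast alone did not make the error go away.

```python
from keras import backend as K

# Assumed module-level constants from the usual keras-frcnn losses.py.
lambda_rpn_regr = 1.0
epsilon = 1e-4

def rpn_loss_regr(num_anchors):
    def rpn_loss_regr_fixed_num(y_true, y_pred):
        # Cast the regression targets and predictions to float32 before
        # subtracting, mirroring the class_loss_regr fix further down.
        x = K.cast(y_true[:, :, :, 4 * num_anchors:], 'float32') - K.cast(y_pred, 'float32')
        x_abs = K.abs(x)
        x_bool = K.cast(K.less_equal(x_abs, 1.0), 'float32')
        # The first 4 * num_anchors channels of y_true act as a valid-anchor mask.
        mask = K.cast(y_true[:, :, :, :4 * num_anchors], 'float32')
        # Smooth L1 loss over the masked anchors.
        return lambda_rpn_regr * K.sum(
            mask * (x_bool * (0.5 * x * x) + (1 - x_bool) * (x_abs - 0.5))
        ) / K.sum(epsilon + mask)
    return rpn_loss_regr_fixed_num
```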
I solved this by deleting the last max pooling layer in nn_base. Hope it helps.
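If you go that route, the change is roughly the sketch below: comment out (or delete) the final MaxPooling2D at the end of nn_base. The block5-style layer names are an assumption based on the VGG-16 base this repo uses, so the notebook's actual nn_base may look a little different; also note that removing a pooling layer changes the feature-map stride, so the rpn_stride used by the data generator has to stay consistent with it.

```python
from keras.layers import Conv2D, MaxPooling2D

# Hypothetical tail of nn_base (VGG-16 style); only the last pooling layer changes.
def nn_base_tail(x):
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
    # Last max pooling layer removed, as suggested above:
    # x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
    return x
```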
The exception is caused by some images; I don't know why and I'm investigating.
I fixed it by casting in the loss function:
from keras import backend as K

def class_loss_regr_fixed_num(y_true, y_pred):
    # num_classes, epsilon and lambda_cls_regr come from the enclosing scope.
    num_classes_float = K.cast(num_classes, 'float32')
    # The slice index must be an integer tensor to be usable for slicing.
    indice_slice = K.cast(4 * num_classes_float, 'int32')
    # Cast both sides to float32 so the subtraction doesn't mix dtypes.
    x = K.cast(y_true[:, :, indice_slice:], 'float32') - K.cast(y_pred, 'float32')
    x_abs = K.abs(x)
    x_bool = K.cast(K.less_equal(x_abs, 1.0), 'float32')
    # Smooth L1, masked by the first 4 * num_classes channels of y_true.
    deviser = K.sum(epsilon + K.cast(y_true[:, :, :indice_slice], 'float32'))
    devided = K.sum(K.cast(y_true[:, :, :indice_slice], 'float32') * (x_bool * (0.5 * x * x) + (1 - x_bool) * (x_abs - 0.5)))
    return lambda_cls_regr * devided / deviser
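Presumably the dtype mismatch comes from y_true arriving as float64 (numpy's default) while the model's predictions are float32, although the thread doesn't include the exact traceback; casting both sides inside the loss sidesteps that without touching the data generator. Since class_loss_regr_fixed_num is normally the closure returned by class_loss_regr(num_classes) in this code base, nothing else in the training loop has to change.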
While I'm training, it raises this error. What could be the cause? Thanks.