Agghhh yes, I just realised that I have to reshape first... OK, I will fix it.
BTW, when you have time I'd like you to explain to me why you implemented it that way. I couldn't understand the rationale behind the implementation.
fixed
> BTW, when you have time I'd like you to explain to me why you implemented it that way. I couldn't understand the rationale behind the implementation.
The implementation follows this
https://lars76.github.io/neural-networks/object-detection/losses-for-segmentation/
In particular, from this point: "All loss functions defined so far have always returned tensors. Another possibility is to return a single scalar for each image. This is especially popular when combining loss functions. DL can be redefined as follows:"
```python
import tensorflow as tf

def dice_loss(y_true, y_pred):
    numerator = 2 * tf.reduce_sum(y_true * y_pred, axis=(1, 2, 3))
    denominator = tf.reduce_sum(y_true + y_pred, axis=(1, 2, 3))

    return 1 - numerator / denominator
```
The only difference is that I don't have the "1 -".
And later it says: "In general, dice loss works better on images than on single pixels."
Now in the develop branch you can see an example, 14_mnist_losses.cpp, where I implemented the dice loss:
```cpp
layer dice_loss(vector<layer> in)
{
    layer num = Mult(2, ReduceSum(Mult(in[0], in[1]), {0, 1, 2}));
    layer den = ReduceSum(Add(in[0], in[1]), {0, 1, 2});

    return Diff(1.0, Div(num, den));
}
```
This is the easy way, since the derivative comes automatically ;)
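For completeness, the metric-style version mentioned above (the one without the "1 -") would simply return the ratio itself. This is only a minimal sketch, assuming the same Mult/Add/ReduceSum/Div graph ops used in the loss above:

```cpp
// Sketch only: dice coefficient as a metric (higher is better), built from
// the same graph ops as the loss above; the loss is just 1 minus this value.
layer dice_coefficient(vector<layer> in)
{
    // in[0]: prediction, in[1]: target, each with per-sample shape {C, H, W}
    layer num = Mult(2, ReduceSum(Mult(in[0], in[1]), {0, 1, 2}));
    layer den = ReduceSum(Add(in[0], in[1]), {0, 1, 2});

    return Div(num, den);  // no "1 -" here, unlike the loss version
}
```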
The main difference in EDDL is that the axes are {0,1,2} instead of {1,2,3} as in Keras, because in EDDL 0 is the first working dimension: the batch dimension is managed internally. Therefore, right now you cannot perform a reduction along the batch dimension.
It is the same reason why the input layer shape is {784} instead of {100,784} (for a batch_size of 100): in the model definition you can see that the batch dimension is not present, and the same applies when dealing with reductions.
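To make the axis difference concrete, here is a minimal sketch (the header path and exact API are assumptions and may differ between EDDL versions; Input, Reshape and ReduceSum are taken from the examples above):

```cpp
#include "apis/eddl.h"   // assumed header; the path may differ across EDDL versions
using namespace eddl;

int main() {
    // The input shape omits the batch dimension: {784}, not {100, 784},
    // even if training later uses batch_size = 100.
    layer in = Input({784});

    // After reshaping to a per-sample image view {1, 28, 28}, a reduction
    // over the whole image uses axes {0,1,2} (channel, height, width).
    // Axis 0 is the first working dimension; the batch axis is handled
    // internally and cannot be reduced over.
    layer img = Reshape(in, {1, 28, 28});
    layer s   = ReduceSum(img, {0, 1, 2});

    // In Keras the same reduction would use axis=(1,2,3), because there
    // axis 0 is the explicit batch dimension.
    return 0;
}
```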
While the metric works, we get that exception with the Dice Loss.
Bear in mind that this loss is just an additional feature, to check whether it improves results (it is reported to improve on Kaggle); it is not strictly needed for the Hackathon.