MrGiovanni / UNetPlusPlus

[IEEE TMI] Official Implementation for UNet++

bce_dice_loss negative loss #49

Open jcarta opened 4 years ago

jcarta commented 4 years ago

Anyone else getting a negative loss value when using bce_dice_loss?

    # Imports assumed for completeness (tf.keras; the standalone `keras`
    # imports work the same way).
    from tensorflow.keras import backend as K
    from tensorflow.keras.losses import binary_crossentropy

    def bce_dice_loss(y_true, y_pred):
        # Weighted BCE minus the Dice coefficient, so values down to -1 are possible.
        return 0.5 * binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred)

    def dice_coef_loss(y_true, y_pred):
        return 1. - dice_coef(y_true, y_pred)

    def dice_coef(y_true, y_pred):
        # Soft Dice coefficient with additive smoothing; at most 1 for probabilities.
        smooth = 1.
        y_true_f = K.flatten(y_true)
        y_pred_f = K.flatten(y_pred)
        intersection = K.sum(y_true_f * y_pred_f)
        return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
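
A quick numeric check makes the behavior concrete: for a near-perfect prediction, dice_coef approaches 1 while the BCE term approaches 0, so the loss approaches -1. A minimal sketch, assuming tf.keras and the definitions above (the input values are made up for illustration):

    y_true = K.constant([[0., 1., 1., 0.]])
    y_pred = K.constant([[0.01, 0.99, 0.99, 0.01]])
    # dice_coef ~ 0.99 and binary_crossentropy ~ 0.01 here,
    # so the loss is roughly 0.5 * 0.01 - 0.99 ~ -0.99.
    print(K.eval(bce_dice_loss(y_true, y_pred)))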
Swathi-Guptha commented 4 years ago

@jcarta did you find a solution for this?

MrGiovanni commented 4 years ago

Hi @jcarta and @Swathi-Guptha

If you want a positive loss value, you can simply add a constant 1.0 to the loss. That is:

    def bce_dice_loss(y_true, y_pred):
        # The +1.0 offset keeps the value non-negative; it does not change the gradients.
        return 1.0 + 0.5 * binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred)

Please note that this constant does not affect gradient descent, since its derivative with respect to the predictions is zero.
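
One way to verify this is to compare gradients with and without the constant; a minimal sketch, assuming eager TensorFlow 2.x and the definitions from the first comment:

    import tensorflow as tf

    y_true = tf.constant([[0., 1., 1., 0.]])
    y_pred = tf.Variable([[0.2, 0.8, 0.6, 0.4]])

    with tf.GradientTape(persistent=True) as tape:
        original = 0.5 * binary_crossentropy(y_true, y_pred) - dice_coef(y_true, y_pred)
        shifted = 1.0 + original

    # Both prints show identical gradients: the constant's derivative is zero.
    print(tape.gradient(original, y_pred))
    print(tape.gradient(shifted, y_pred))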

Hope this helps you.

Zongwei

Swathi-Guptha commented 4 years ago

So, is the formula used here correct for training an accurate model?

I was reading about BCE-Dice loss and came across a different formula:

    def DiceBCELoss(targets, inputs, smooth=1e-6):
        # Flatten label and prediction tensors.
        inputs = K.flatten(inputs)
        targets = K.flatten(targets)

        BCE = binary_crossentropy(targets, inputs)
        # Elementwise product; the snippet's original K.sum(K.dot(...)) fails
        # on the flattened 1-D tensors.
        intersection = K.sum(targets * inputs)
        dice_loss = 1 - (2 * intersection + smooth) / (K.sum(targets) + K.sum(inputs) + smooth)
        Dice_BCE = BCE + dice_loss

        return Dice_BCE

May I know what the difference between the two is?
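
The two formulas differ in two ways, both visible by evaluating them on the same inputs: DiceBCELoss adds the Dice loss (1 - dice_coef) to full-weight BCE, so it stays non-negative, while the original (no-offset) bce_dice_loss halves the BCE term and subtracts dice_coef directly, which is why it can go negative. A minimal sketch, assuming the definitions above (including the elementwise-product fix) and made-up input values:

    y_true = K.constant([[0., 1., 1., 0.]])
    y_pred = K.constant([[0.2, 0.8, 0.6, 0.4]])
    # bce_dice_loss halves BCE and subtracts dice_coef, so it can be negative;
    # DiceBCELoss uses full-weight BCE plus (1 - dice), so it is always >= 0.
    print(K.eval(bce_dice_loss(y_true, y_pred)))  # ~ -0.58
    print(K.eval(DiceBCELoss(y_true, y_pred)))    # ~  0.67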