In the `layer_op` of class `LossFunction`, there is a `reduce_mean` taken over the `loss_batch`. This divides by |K|, since |K| is the number of classes, and I think this accounts for the |K| in the denominator.
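
To make that explicit, here is a tiny sketch (toy values; the variable names are mine, not NiftyNet's actual code):

```python
import tensorflow as tf

# suppose loss_batch holds one dice value per class, shape [|K|] with |K| = 3
loss_batch = tf.constant([0.8, 0.6, 0.7])

# reduce_mean sums over the class axis and divides by |K|,
# which is where the 1/|K| factor in the formula comes from
mean_loss = tf.reduce_mean(loss_batch)  # (0.8 + 0.6 + 0.7) / 3 = 0.7
explicit = tf.reduce_sum(loss_batch) / tf.cast(tf.size(loss_batch), tf.float32)
# mean_loss == explicit
```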
Yes, that's right. But in addition, I think `u_i` should be raised to the power k in both the numerator and the denominator, with code like `tf.pow(u_i, k)`.
No, I think this k just denotes that the probability is for the k-th class; it is an index, not an exponent. Otherwise your loss function would depend on the order in which you assign the segmentation labels (i.e. it would be different with background = 0, foreground = 1 than with background = 1, foreground = 0), which should not be the case.
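
For reference, this is how I read the multi-class dice term from the paper, with k as an index over classes (my own transcription, so treat it as a sketch rather than a quote):

```latex
L_{\mathrm{dice}} = -\frac{2}{|K|} \sum_{k \in K}
    \frac{\sum_{i \in I} u_i^k \, v_i^k}
         {\sum_{i \in I} u_i^k + \sum_{i \in I} v_i^k}
```

where u_i^k is the softmax output of voxel i for class k and v_i^k is the one-hot ground truth. Read this way, relabelling the classes only permutes the terms of the outer sum, so the loss is unchanged.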
If you are right, I think the k should be written as a subscript.
Hi @Zach-ER, I contacted the author of nnU-Net. Yes, you are right.
> According to the no-new-Net paper, each u should be raised to the power k; however, this step is not implemented in your version (`def dice_plus_xent_loss` in `loss_segmentation.py`).
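
For anyone landing here later, a minimal sketch of a dice-plus-cross-entropy loss with k treated as a class index (the function name, tensor shapes, and epsilon smoothing are my assumptions, not the actual `dice_plus_xent_loss`):

```python
import tensorflow as tf

def dice_plus_xent_sketch(logits, labels, n_classes, eps=1e-8):
    # logits: float [n_voxels, n_classes]; labels: int [n_voxels]
    one_hot = tf.one_hot(labels, depth=n_classes)         # v_i^k
    softmax = tf.nn.softmax(logits)                       # u_i^k
    # cross-entropy term
    xent = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits))
    # dice term: k only indexes the class axis; u_i is never raised to a power
    intersect = tf.reduce_sum(softmax * one_hot, axis=0)  # sum over voxels i, per class
    denom = tf.reduce_sum(softmax, axis=0) + tf.reduce_sum(one_hot, axis=0)
    dice = tf.reduce_mean(2.0 * intersect / (denom + eps))  # reduce_mean gives the 1/|K|
    return xent - dice  # minimising this maximises dice overlap
```

Adding a `tf.pow(softmax, k)` here would make the dice term depend on which integer each class happens to get, which is exactly the asymmetry discussed above.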