NifTK / NiftyNet

[unmaintained] An open-source convolutional neural networks platform for research in medical image analysis and image-guided therapy
http://niftynet.io
Apache License 2.0
1.37k stars, 404 forks

dice_plus_xent_loss seems to be different from that of nnU-Net ("No New-Net") #297

Closed huangmozhilv closed 5 years ago

huangmozhilv commented 5 years ago

According to nnU-Net ("No New-Net"), each u should be raised to the power k [formula image]; however, this step is not implemented in your version (`dice_plus_xent_loss` in `loss_segmentation.py`).

Zach-ER commented 5 years ago

In the `layer_op` of class `LossFunction`, a `reduce_mean` is taken over `loss_batch`. This divides by |K|, where |K| is the number of classes, so I think it accounts for the 1/|K| factor in the formula.
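To make this concrete, here is a minimal NumPy sketch (not NiftyNet's actual code; the function name and `eps` smoothing term are illustrative assumptions): the per-class soft Dice terms play the role of `loss_batch`, and taking the mean over them supplies the 1/|K| factor.

```python
import numpy as np

def soft_dice_per_class(u, v, eps=1e-8):
    """Hypothetical helper, not NiftyNet code.

    u: softmax probabilities, v: one-hot labels, both (n_voxels, n_classes).
    Returns one (negative) soft Dice term per class k.
    """
    num = 2.0 * (u * v).sum(axis=0)
    den = u.sum(axis=0) + v.sum(axis=0) + eps
    return -num / den

u = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])  # softmax outputs
v = np.array([[1., 0.], [0., 1.], [1., 0.]])        # one-hot ground truth

loss_batch = soft_dice_per_class(u, v)  # shape (|K|,), one term per class
loss = loss_batch.mean()                # the mean divides by |K|
assert np.isclose(loss, loss_batch.sum() / u.shape[1])
```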

huangmozhilv commented 5 years ago

Yes, that's right. But in addition, I think u_i should be raised to the power k in both the numerator and the denominator, with code like `tf.pow(u_i, k)`.

Zach-ER commented 5 years ago

No, I think this k just denotes that the probability is for the k-th class; it is an index, not an exponent. Otherwise your loss function would depend on the order in which you assign segmentation labels, i.e. it would differ between (background: 0, foreground: 1) and (background: 1, foreground: 0), which should not be the case.
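A quick numerical check illustrates this point (a hedged NumPy sketch under the two readings of u_i^k, not NiftyNet code): with k as a class index, the mean Dice loss is unchanged when class labels are permuted; with k as an exponent, it is not.

```python
import numpy as np

# Hypothetical illustration: u holds softmax probabilities and v one-hot
# labels, both shaped (n_voxels, n_classes). Neither function is NiftyNet code.

def dice_k_as_index(u, v):
    # k indexes the class axis: u_i^k is the probability of class k at voxel i
    num = 2.0 * (u * v).sum(axis=0)
    den = u.sum(axis=0) + v.sum(axis=0)
    return -(num / den).mean()

def dice_k_as_exponent(u, v):
    # misreading: raise the per-class probabilities to the power k
    terms = []
    for k in range(u.shape[1]):
        uk, vk = u[:, k] ** k, v[:, k] ** k
        terms.append(2.0 * (uk * vk).sum() / (uk.sum() + vk.sum()))
    return -np.mean(terms)

u = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
v = np.array([[1., 0.], [0., 1.], [1., 0.]])
perm = [1, 0]  # swap the background/foreground labels

# index reading: invariant under relabelling
assert np.isclose(dice_k_as_index(u, v),
                  dice_k_as_index(u[:, perm], v[:, perm]))
# exponent reading: the loss changes when the labels are swapped
assert not np.isclose(dice_k_as_exponent(u, v),
                      dice_k_as_exponent(u[:, perm], v[:, perm]))
```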

huangmozhilv commented 5 years ago

If you are right, then I think k should be written as a subscript.

huangmozhilv commented 5 years ago

Hi @Zach-ER, I contacted the author of nnU-Net. Yes, you are right.