MrGiovanni / UNetPlusPlus

[IEEE TMI] Official Implementation for UNet++

Question: why use `l = weights[0] * self.loss(x[-1], y[0])` in loss_functions/deep_supervision.py? #75

Open cuihu1998 opened 3 years ago

cuihu1998 commented 3 years ago

I ran some experiments on LiTS2017 but cannot reproduce the results you report. Hoping for your help!

This is the code in loss_functions/deep_supervision.py:

```python
def forward(self, x, y):
    assert isinstance(x, (tuple, list)), "x must be either tuple or list"
    assert isinstance(y, (tuple, list)), "y must be either tuple or list"
    if self.weight_factors is None:
        weights = [1] * len(x)
    else:
        weights = self.weight_factors

    l = weights[0] * self.loss(x[-1], y[0])
    #for i in range(1, len(x)):
    #    if weights[i] != 0:
    #        l += weights[i] * self.loss(x[i], y[0])
    return l
```
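For reference, here is a minimal self-contained sketch of what I believe the intended deep-supervision loss looks like with the loop enabled, modeled on nnUNet's `MultipleOutputLoss2`. The class name and details are my own reconstruction, not this repo's code, and I keep `y[0]` in the loop as the quoted comments do (which assumes all outputs were already upscaled to full resolution):

```python
import torch.nn as nn

class DeepSupervisionLoss(nn.Module):
    """Weighted sum of a base loss over all decoder outputs (sketch)."""

    def __init__(self, loss, weight_factors=None):
        super().__init__()
        self.loss = loss                      # base loss, e.g. Dice + cross-entropy
        self.weight_factors = weight_factors  # one weight per output scale

    def forward(self, x, y):
        assert isinstance(x, (tuple, list)), "x must be either tuple or list"
        assert isinstance(y, (tuple, list)), "y must be either tuple or list"
        weights = self.weight_factors if self.weight_factors is not None else [1] * len(x)

        # full-resolution output first, following nnUNet's output ordering
        l = weights[0] * self.loss(x[0], y[0])
        # add the weighted losses of the remaining (lower-resolution) outputs
        for i in range(1, len(x)):
            if weights[i] != 0:
                l += weights[i] * self.loss(x[i], y[0])
        return l
```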

In nnUNet, the corresponding line is `l = weights[0] * self.loss(x[0], y[0])`, because the nnUNet network returns its segmentation outputs like this:

```python
if self._deep_supervision and self.do_ds:
    return tuple([seg_outputs[-1]] + [i(j) for i, j in
                                      zip(list(self.upscale_logits_ops)[::-1], seg_outputs[:-1][::-1])])
else:
    return seg_outputs[-1]
```
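To see the ordering concretely, here is a toy illustration of what that return statement produces. The values are hypothetical stand-ins, and identity lambdas replace the real `upscale_logits_ops`:

```python
# seg_outputs is collected stage by stage, so its last entry is the
# full-resolution prediction; the return statement reverses the rest.
seg_outputs = ["1/8 res", "1/4 res", "1/2 res", "full res"]
upscale_logits_ops = [lambda t: t] * (len(seg_outputs) - 1)  # identity stand-ins

x = tuple([seg_outputs[-1]]
          + [op(s) for op, s in zip(upscale_logits_ops[::-1],
                                    seg_outputs[:-1][::-1])])
print(x)  # ('full res', '1/2 res', '1/4 res', '1/8 res')
```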

So x[0] is the output with the largest spatial size (full resolution), which is why nnUNet's loss uses x[0].

Why does UNetPlusPlus use x[-1]? Doesn't that refer to the output of Layer 1? And why is the following code in deep_supervision.py commented out?

```python
#for i in range(1, len(x)):
#    if weights[i] != 0:
#        l += weights[i] * self.loss(x[i], y[0])
```
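Incidentally, if the loop were enabled, nnUNet would not weight all scales equally. As far as I can tell from nnUNetTrainerV2 (exact details may differ by nnUNet version), it halves the weight at each coarser scale, zeroes the coarsest output, and normalizes:

```python
import numpy as np

net_numpool = 5  # number of supervised outputs, example value
weights = np.array([1 / (2 ** i) for i in range(net_numpool)])
weights[-1] = 0                  # drop the coarsest output entirely
weights = weights / weights.sum()
print(weights.round(3))          # [0.533 0.267 0.133 0.067 0.   ]
```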

I changed the code as described above, making deep_supervision.py match nnUNet's version, but I still cannot get the loss to decrease at the expected rate.
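In case it helps with debugging, one way to investigate the slow loss decline is to log the unweighted loss at each supervision scale separately, to see whether one output stagnates. A hypothetical helper (not from either repo), again assuming all outputs are compared against the full-resolution target:

```python
def per_scale_losses(loss_fn, x, y):
    # unweighted scalar loss at every output scale
    return [loss_fn(xi, y[0]).item() for xi in x]
```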

Thanks!