LIVIAETS / boundary-loss

Official code for "Boundary loss for highly unbalanced segmentation", runner-up for best paper award at MIDL 2019. Extended version in MedIA, volume 67, January 2021.
https://doi.org/10.1016/j.media.2020.101851
MIT License

error in SurfaceLoss? #13

Closed FabianIsensee closed 5 years ago

FabianIsensee commented 5 years ago

Hi there, the signature of the call method for the SurfaceLoss looks like this:

https://github.com/LIVIAETS/surface-loss/blob/108bd9892adca476e6cdf424124bc6268707498e/losses.py#L80

meaning that the loss expects the arguments in the following order: softmax_probabilities, distance_maps, anything. However, all losses in main.py are called with

https://github.com/LIVIAETS/surface-loss/blob/108bd9892adca476e6cdf424124bc6268707498e/main.py#L101

where the order is softmax_probabilities, labels, distance_maps. The latter is consistent with the other losses in losses.py. Could it be that there is a mistake? Thank you very much for sharing your code :-)

Best, Fabian
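For readers landing here, the `__call__` in question looks roughly like this (a paraphrased sketch based on the linked losses.py, not a verbatim copy; `idc` selects the class channels the loss applies to):

```python
import torch

class SurfaceLoss:
    """Boundary loss sketch: mean of the softmax probabilities weighted by
    the signed distance maps of the ground truth (not a verbatim copy)."""
    def __init__(self, idc):
        self.idc = idc  # indices of the classes the loss is applied to

    def __call__(self, probs, dist_maps, _):
        # Note the argument order discussed in this issue:
        # (softmax probabilities, distance maps, unused third argument)
        pc = probs[:, self.idc, ...].float()      # (B, |idc|, H, W)
        dc = dist_maps[:, self.idc, ...].float()  # (B, |idc|, H, W)
        return (pc * dc).mean()
```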

HKervadec commented 5 years ago

Hey there,

The code is a little more convoluted than needed, because I used the same codebase for my other works (namely https://github.com/LIVIAETS/SizeLoss_WSS, but you can check out the other repos in the organization), which means I have to handle several combinations of inputs.

Right now all my losses use either the label or the bounds, but that might change in the future so I kept functions that take the two as an input.

In the boundary loss paper, the distance maps are the labels transformed in a different way:

    from functools import partial
    from operator import itemgetter

    import numpy as np
    import torch
    from torchvision import transforms

    from utils import class2one_hot, one_hot2dist  # helpers defined in this repo

    n_class = 2  # number of segmentation classes (example value)

    gt_transform = transforms.Compose([
        lambda img: np.array(img)[np.newaxis, ...],
        lambda nd: torch.tensor(nd, dtype=torch.int64),
        partial(class2one_hot, C=n_class),
        itemgetter(0)
    ])
    dist_map_transform = transforms.Compose([
        lambda img: np.array(img)[np.newaxis, ...],
        lambda nd: torch.tensor(nd, dtype=torch.int64),
        partial(class2one_hot, C=n_class),
        itemgetter(0),
        lambda t: t.cpu().numpy(),
        one_hot2dist,
        lambda nd: torch.tensor(nd, dtype=torch.float32)
    ])

(you will notice that dist_map_transform is a continuation of gt_transform)
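For context, `one_hot2dist` (one of the helpers in the repo's utils.py) turns each one-hot class mask into a signed distance map: positive outside the object, negative inside, zero on the boundary pixels. A minimal re-implementation along those lines, assuming a (C, H, W) one-hot array (a sketch, not the repo's exact code):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as eucl_distance

def one_hot2dist(seg: np.ndarray) -> np.ndarray:
    # seg: one-hot encoded ground truth, shape (C, H, W)
    res = np.zeros_like(seg, dtype=np.float32)
    for c in range(len(seg)):
        posmask = seg[c].astype(bool)
        if posmask.any():
            negmask = ~posmask
            # positive distances outside the object, negative inside,
            # zero on the boundary pixels themselves
            res[c] = (eucl_distance(negmask) * negmask
                      - (eucl_distance(posmask) - 1) * posmask)
    return res
```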

the data folders are defined like this:

B_DATA = [('in_npy', torch.tensor, False), ('gt_npy', gt_transform, True)]

results/wmh/gdl_surface_steal: DATA = --folders="$(B_DATA)+[('gt_npy', gt_transform, True), \
         ('gt_npy', dist_map_transform, False)]" 

which gives 4 tensors as output of the dataloader: the input image, the one-hot ground truth (from B_DATA), a second copy of the one-hot ground truth, and the distance map.

Then, the losses are defined this way:

results/wmh/gdl_surface_steal: OPT = --losses="[('GeneralizedDice', {'idc': [0, 1]}, None, None, None, 1), \
    ('SurfaceLoss', {'idc': [1]}, None, None, None, 0.01)]"

The three consecutive Nones correspond to the options for the bounds, which neither loss uses here.
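Put differently, each loss is paired with its own target from the dataloader (the one-hot labels for GeneralizedDice, the distance maps for SurfaceLoss), and the results are summed with the trailing weights (1 and 0.01). A toy sketch of that combination, with illustrative names rather than the repo's actual training-loop code:

```python
def combine_losses(loss_specs, probs):
    # loss_specs: list of (loss_fn, target, weight) triples, mirroring the
    # --losses option; each loss_fn follows the (probs, target, bounds)
    # signature used in losses.py, with the unused bounds passed as None
    return sum(weight * fn(probs, target, None)
               for fn, target, weight in loss_specs)
```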

You can find one working example here https://github.com/LIVIAETS/SizeLoss_WSS/blob/master/acdc.make#L149 if you are interested.

So, to summarize, there is no mistake: we end up with two losses, GeneralizedDice computed against the one-hot labels, and SurfaceLoss computed against the distance maps.

Both of them ignore the provided bounds.

Let me know if any part was unclear,

Hoel