LIVIAETS / boundary-loss

Official code for "Boundary loss for highly unbalanced segmentation", runner-up for best paper award at MIDL 2019. Extended version in MedIA, volume 67, January 2021.
https://doi.org/10.1016/j.media.2020.101851
MIT License

Help. Is this loss function the same as yours? Thanks very much. #24

Closed (YanqingWu closed this issue 4 years ago)

YanqingWu commented 4 years ago

I mean, is it the same idea but not the same implementation? Please refer to: https://github.com/yiskw713/boundary_loss_for_remote_sensing.

import torch
import torch.nn as nn
import torch.nn.functional as F
# `one_hot` used below is a small helper (defined in the linked repo) that turns
# (N, H, W) integer labels into a float one-hot map of shape (N, C, H, W).


class BDLoss(nn.Module):
    """Boundary Loss proposed in:
    Alexey Bokhovkin et al., Boundary Loss for Remote Sensing Imagery Semantic Segmentation
    https://arxiv.org/abs/1905.07852
    """

    def __init__(self, theta0=3, theta=5):
        super().__init__()

        self.theta0 = theta0
        self.theta = theta

    def forward(self, pred, gt):
        """
        Input:
            - pred: the output from the model (before softmax),
                    shape (N, C, H, W)
            - gt: ground truth map,
                    shape (N, H, W)
        Return:
            - boundary loss, averaged over the mini-batch
        """

        n, c, _, _ = pred.shape

        # softmax so that the predicted map lies in [0, 1]
        pred = torch.softmax(pred, dim=1)

        # one-hot vector of ground truth
        one_hot_gt = one_hot(gt, c)

        # boundary map
        gt_b = F.max_pool2d(1 - one_hot_gt, kernel_size=self.theta0, stride=1, padding=(self.theta0 - 1) // 2)
        gt_b -= 1 - one_hot_gt

        pred_b = F.max_pool2d(1 - pred, kernel_size=self.theta0, stride=1, padding=(self.theta0 - 1) // 2)
        pred_b -= 1 - pred

        # extended boundary map
        gt_b_ext = F.max_pool2d(gt_b, kernel_size=self.theta, stride=1, padding=(self.theta - 1) // 2)

        pred_b_ext = F.max_pool2d(pred_b, kernel_size=self.theta, stride=1, padding=(self.theta - 1) // 2)

        # reshape
        gt_b = gt_b.view(n, c, -1)
        pred_b = pred_b.view(n, c, -1)
        gt_b_ext = gt_b_ext.view(n, c, -1)
        pred_b_ext = pred_b_ext.view(n, c, -1)

        # Precision, Recall
        P = torch.sum(pred_b * gt_b_ext, dim=2) / (torch.sum(pred_b, dim=2) + 1e-7)
        R = torch.sum(pred_b_ext * gt_b, dim=2) / (torch.sum(gt_b, dim=2) + 1e-7)

        # Boundary F1 Score
        BF1 = 2 * P * R / (P + R + 1e-7)

        # average (1 - BF1) over the classes and the mini-batch
        loss = torch.mean(1 - BF1)

        return loss
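
For reference, a minimal usage sketch (dummy tensors, not part of the linked snippet; shapes follow the docstring above):

logits = torch.randn(2, 4, 64, 64)            # (N, C, H, W) raw scores from a model
labels = torch.randint(0, 4, (2, 64, 64))     # (N, H, W) integer class map
criterion = BDLoss(theta0=3, theta=5)
loss = criterion(logits, labels)
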
HKervadec commented 4 years ago

Hey,

I had a quick glance at the paper you linked (not much time these days to read it in detail).

They also want to use the boundary information, but the similarity pretty much stops there.

In their method, they basically want to maximize the DSC score over the boundary area only (whereas we, in our paper, want to minimize the distance between the two boundaries).

Their approach requires computing the predicted boundary in a differentiable fashion, which is actually quite simple to approximate (you can do it with hard-coded convolutions and pooling, in about 3 lines of code).
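
For illustration, a minimal version of that approximation (it is the same max-pooling trick used in the snippet above; the function name is just for this sketch):

import torch.nn.functional as F

def soft_boundary(m, k=3):
    # m: soft mask in [0, 1], shape (N, C, H, W). Dilate the complement with
    # max-pooling and subtract it: what remains is a thin band around the boundary.
    return F.max_pool2d(1 - m, kernel_size=k, stride=1, padding=k // 2) - (1 - m)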

The goal of our method (computing the distance between the two boundaries) not only requires computing the boundary, but then also the distance to it for each pixel (that is the really tricky part). This is our contribution: showing that we can do it with a simple pixel-wise multiplication.
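
For concreteness, a minimal sketch of that idea (not the exact code of this repo; the helper names are just for illustration). The signed distance map of the ground-truth boundary is precomputed, e.g. with scipy's distance_transform_edt, and the loss is then the pixel-wise product of that map with the softmax probabilities:

import numpy as np
import torch
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # mask: binary (H, W) ground truth for one class. Negative inside the object,
    # positive outside, roughly zero right at the boundary.
    posmask = mask.astype(bool)
    if not posmask.any():
        return np.zeros(mask.shape, dtype=np.float32)
    negmask = ~posmask
    dist = distance_transform_edt(negmask) * negmask \
        - (distance_transform_edt(posmask) - 1) * posmask
    return dist.astype(np.float32)

def boundary_loss(probs, dist_maps):
    # probs: softmax probabilities, (N, C, H, W); dist_maps: matching precomputed
    # signed distance maps, converted to a float tensor.
    return (probs * dist_maps).mean()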

Their paper reminds me of something a bit similar I saw at ISBI recently: https://www.researchgate.net/publication/339457295_Learning_a_Loss_Function_for_Segmentation_A_Feasibility_Study In that one, the authors took a different approach, learning the loss function with a smaller neural net. But at the end of the day, I think those two papers actually try to optimize the same thing (DSC over the boundary area), just with different approaches. (It would be cool to compare the two.)

YanqingWu commented 4 years ago

@HKervadec thanks for your explanation.