Hey,
I had a quick glance at the paper you linked (not much time these days to read it in detail).
They also want to use boundary information, but the similarity kind of stops there.
In their method, they basically want to maximize the DSC score over the boundary area only (whereas, in our paper, we want to minimize the distance between the two boundaries).
That requires computing the predicted boundary in a differentiable fashion, which is actually quite simple to approximate (you can do it with hard-coded convolutions and pooling, in about 3 lines of code), as sketched below.
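For illustration, here is a minimal sketch (PyTorch, not the authors' actual code) of one common way to get a soft boundary with max-pooling used as a morphological gradient, and of a DSC computed over that boundary region; the function names, kernel size, and tensor shapes are assumptions on my part:

```python
import torch
import torch.nn.functional as F

def soft_boundary(probs: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:
    # probs: (B, C, H, W) softmax probabilities (or one-hot ground truth).
    # Morphological gradient via max-pooling: dilation - erosion gives a
    # differentiable map that is high near the object boundary.
    pad = kernel_size // 2
    dilated = F.max_pool2d(probs, kernel_size, stride=1, padding=pad)
    eroded = -F.max_pool2d(-probs, kernel_size, stride=1, padding=pad)
    return dilated - eroded

def boundary_dsc(probs: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # target: (B, C, H, W) one-hot ground truth.
    # DSC restricted to the (soft) boundary regions of prediction and ground truth.
    pred_b = soft_boundary(probs)
    gt_b = soft_boundary(target)
    inter = (pred_b * gt_b).sum(dim=(2, 3))
    union = pred_b.sum(dim=(2, 3)) + gt_b.sum(dim=(2, 3))
    return ((2 * inter + eps) / (union + eps)).mean()
```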
The goal of our method (computing the distance between the two boundaries) not only requires computing the boundary, but then computing the distance for each pixel (that is the really tricky part). This is our contribution: showing that we can do that with a simple pixel-wise multiplication.
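As a rough sketch of that pixel-wise multiplication (assuming the signed distance maps of the ground-truth boundaries are pre-computed offline, e.g. with scipy's Euclidean distance transform; the names and shapes here are illustrative, see the actual repo for the real implementation):

```python
import torch

def boundary_loss(probs: torch.Tensor, dist_maps: torch.Tensor) -> torch.Tensor:
    # probs:     (B, C, H, W) softmax probabilities.
    # dist_maps: (B, C, H, W) signed distance maps of the ground-truth boundaries
    #            (negative inside the object, positive outside), pre-computed on CPU.
    # The expensive distance computation is folded into the pre-computed maps,
    # so the loss itself is just a pixel-wise multiplication and an average.
    return (probs * dist_maps).mean()
```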
Their paper reminds me of something a bit similar I saw at ISBI recently: https://www.researchgate.net/publication/339457295_Learning_a_Loss_Function_for_Segmentation_A_Feasibility_Study. In that one, the authors took a different approach, learning the loss function with a smaller neural net. But at the end of the day, I think those two papers actually try to optimize the same thing (DSC over the boundary area), just through different means. (It would be cool to compare the two.)
@HKervadec thanks for your explanation.
What I mean is: is it the same idea but not the same implementation? Please refer to: https://github.com/yiskw713/boundary_loss_for_remote_sensing.