PeterJackNaylor / DRFNS

This repository contains the code necessary in order to reproduce the work contained in the submitted paper: "Segmentation of Nuclei in Histopathology Images by deep regression of the distance map".

Regression distance network #2

Closed · John1231983 closed 5 years ago

John1231983 commented 5 years ago

Hello, could I ask you three questions? First, you concluded in the paper that

We notice that only the unnormalized f1 regression seems to learn the distance map reasonably well

Does it mean that regression on the real (unnormalized) distance values works better than on distance values normalized to [0, 1]?

Second, how do you produce a pixel-wise regression output, i.e. a predicted distance map with the same size as the input image? Do you use a 1x1 convolution instead of a linear layer?

Third, do you need a softmax function before the last regression layer? Pixel-wise regression may produce negative distance values (I assume), while the distance map must be non-negative.

Thanks

PeterJackNaylor commented 5 years ago

Hello, of course.

Does it mean that regression on the real (unnormalized) distance values works better than on distance values normalized to [0, 1]?

Yes. Another way to think about it is with a small and a big cell. With normalization they share the same output values, so the network has to learn to map the same target values across both small and big cells. Without normalization it does not have that constraint, so it is easier for it to learn.
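
To make that concrete, here is a minimal sketch (my own illustration, not the repository's code; it assumes `numpy`/`scipy` and one plausible per-nucleus normalization, which may differ from the paper's exact scheme) contrasting the two targets:

```python
import numpy as np
from scipy import ndimage

def distance_maps(mask):
    """Unnormalized and per-nucleus normalized distance maps
    from a binary mask (1 inside nuclei, 0 background)."""
    # Distance of each foreground pixel to the nearest background pixel.
    dist = ndimage.distance_transform_edt(mask)
    # Normalize each connected component by its own maximum, so every
    # nucleus peaks at 1 regardless of its size.
    labels, n = ndimage.label(mask)
    norm = np.zeros_like(dist)
    for i in range(1, n + 1):
        obj = labels == i
        norm[obj] = dist[obj] / dist[obj].max()
    return dist, norm

# One small and one big square "cell": after normalization both peak at 1,
# so the network must produce the same target values at very different scales.
mask = np.zeros((32, 32), dtype=np.uint8)
mask[2:6, 2:6] = 1       # small cell
mask[10:28, 10:28] = 1   # big cell
dist, norm = distance_maps(mask)
print(dist.max(), norm.max())  # 9.0 vs 1.0
```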

Second, how do you produce a pixel-wise regression output, i.e. a predicted distance map with the same size as the input image? Do you use a 1x1 convolution instead of a linear layer?

Yes, we use a U-Net architecture and map to the output with a 1x1 convolution at the end. We do not use any linear layers.
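
As a rough sketch of what such a head can look like (hypothetical Keras code, not the repository's exact implementation):

```python
import tensorflow as tf

# A 1x1 convolution acts as a per-pixel linear map, so the output keeps
# the spatial size of its input; no dense/linear layer is needed.
head = tf.keras.layers.Conv2D(
    filters=1,        # one channel: the predicted distance at each pixel
    kernel_size=1,    # 1x1 convolution
    activation=None,  # linear output, suitable for regression
)

features = tf.random.normal((1, 64, 64, 32))  # (batch, H, W, channels)
print(head(features).shape)                   # (1, 64, 64, 1)
```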

Third, do you need a softmax function before the last regression layer? Pixel-wise regression may produce negative distance values (I assume), while the distance map must be non-negative.

No need to add a softmax: a softmax would constrain the output to [0, 1], whereas we allow the output to take any real value (even negative values). If you like, you could add a ReLU layer to force non-negative outputs, but it isn't necessary.
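
For illustration (again a hedged sketch, not the paper's exact code), training such an unconstrained output with a pixel-wise MSE loss could look like:

```python
import tensorflow as tf

def pixelwise_mse(y_true, y_pred):
    """Mean squared error between true and predicted distance maps,
    both of shape (batch, H, W, 1)."""
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Optional: force non-negative predictions, e.g. at inference time.
def clip_negatives(y_pred):
    return tf.nn.relu(y_pred)
```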

Best,

John1231983 commented 5 years ago

Thanks for the details, it is clear now. I think if you add a ReLU at the end, the MSE (mean squared error) between the prediction and the true values will decrease, even though it does not help the network learn better (the ReLU has no learnable parameters).
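
A quick numpy check of that observation (illustrative only, with arbitrary random values): clipping negative predictions can never increase the squared error when the true distances are non-negative.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = np.abs(rng.normal(size=10_000))  # true distances are >= 0
y_pred = rng.normal(size=10_000)          # unconstrained predictions

mse         = np.mean((y_true - y_pred) ** 2)
mse_clipped = np.mean((y_true - np.maximum(y_pred, 0)) ** 2)
print(mse >= mse_clipped)  # True: clipping never hurts, pointwise
```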