nabsabraham / focal-tversky-unet

This repo contains the code for our paper "A novel focal Tversky loss function and improved Attention U-Net for lesion segmentation" accepted at IEEE ISBI 2019.

Gating Signal before Convolution #31

Open ghost opened 2 years ago

ghost commented 2 years ago

Hey,

I was working through the paper and the code together. In Figure 2 of the paper, at each decoder level the output of the convolution block is both passed to the next up-convolution and used as the gating signal. In the code this is consistent for the first up-convolution (it uses `center`, the output of the bottleneck convolution block). But at the subsequent levels of the expanding path, no 3x3 convolutions are applied before the input to the next level and the gating signal; instead, the previous concatenation of `attn` and `up` is used directly.

However, for the output at each level, full convolutional blocks are applied (not just a single 3x3 convolution):

```python
conv6 = UnetConv2D(up1, 256, is_batchnorm=True, name='conv6')
conv7 = UnetConv2D(up2, 128, is_batchnorm=True, name='conv7')
conv8 = UnetConv2D(up3, 64, is_batchnorm=True, name='conv8')
conv9 = UnetConv2D(up4, 32, is_batchnorm=True, name='conv9')
```
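To make the two wirings concrete, here is a shape-only NumPy sketch of one decoder step, not the repo's actual code: `conv_block`, `upsample`, and `attn_gate` are simplified stand-ins (1x1 projections and nearest-neighbour upsampling) for `UnetConv2D`, the up-convolution, and the attention gate. It contrasts gating the next level with a conv-block output (as in Figure 2) versus with the raw `attn`/`up` concatenation (as in the code):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_block(t, out_ch):
    """Stand-in for UnetConv2D: a per-pixel 1x1 projection + ReLU (shapes only)."""
    w = rng.standard_normal((t.shape[-1], out_ch)) * 0.1
    return np.maximum(t @ w, 0.0)

def upsample(t):
    """2x nearest-neighbour upsampling, standing in for the up-convolution."""
    return t.repeat(2, axis=0).repeat(2, axis=1)

def attn_gate(x, g):
    """Stand-in attention gate: rescales skip features x by a per-pixel
    coefficient computed from the gating signal g."""
    w = rng.standard_normal((g.shape[-1], 1)) * 0.1
    alpha = 1.0 / (1.0 + np.exp(-(g @ w)))  # coefficients in (0, 1)
    return x * alpha

# encoder skip features and the bottleneck output ("center"), shapes only
skip2 = rng.standard_normal((16, 16, 128))
skip1 = rng.standard_normal((32, 32, 64))
center = rng.standard_normal((8, 8, 256))

# Level 1 -- consistent with Figure 2: the gating comes from a conv-block output
attn1 = attn_gate(skip2, upsample(center))
up1 = np.concatenate([upsample(center), attn1], axis=-1)
conv6 = conv_block(up1, 128)

# Level 2, Figure-2 wiring: gate with the conv-block output conv6
attn2_fig = attn_gate(skip1, upsample(conv6))

# Level 2, code wiring: gate with the raw concatenated tensor up1
attn2_code = attn_gate(skip1, upsample(up1))

print(attn2_fig.shape, attn2_code.shape)
```

Both variants produce gated skip features of the same shape; the difference is purely which tensor feeds the gating signal.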

So the implementation does not seem consistent with the figure to me. I'm completely new to attention networks, so I would be glad for any help understanding this architecture.
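For anyone else new to attention gates, here is a minimal NumPy sketch of the additive (soft) attention gate used in Attention U-Net style models: alpha = sigmoid(psi(ReLU(W_x·x + W_g·g))). The 1x1 convolutions are modelled as per-pixel matrix multiplies, the weights are random placeholders, and the gating signal is assumed already resampled to the skip connection's spatial size, so this illustrates only the mechanism, not the trained model:

```python
import numpy as np

def attention_gate(x, g, inter_ch, rng):
    """Additive attention gate: x is the skip-connection feature map (H, W, Cx),
    g is the gating signal from the coarser decoder level (H, W, Cg).
    Returns x rescaled by per-pixel attention coefficients in (0, 1)."""
    W_x = rng.standard_normal((x.shape[-1], inter_ch)) * 0.1  # theta_x: 1x1 conv
    W_g = rng.standard_normal((g.shape[-1], inter_ch)) * 0.1  # phi_g: 1x1 conv
    psi = rng.standard_normal((inter_ch, 1)) * 0.1            # psi: 1x1 conv to 1 channel
    f = np.maximum(x @ W_x + g @ W_g, 0.0)                    # ReLU(theta_x + phi_g)
    alpha = 1.0 / (1.0 + np.exp(-(f @ psi)))                  # sigmoid -> (H, W, 1)
    return x * alpha                                          # gated skip features

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 32))   # skip-connection features from the encoder
g = rng.standard_normal((8, 8, 64))   # gating signal from the coarser decoder level
out = attention_gate(x, g, inter_ch=16, rng=rng)
print(out.shape)  # (8, 8, 32)
```

The key point for this issue is only where `g` comes from: the figure takes it from a conv-block output, while the code (past the first level) takes it from the concatenated `attn`/`up` tensor.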

Thanks :)