LeeJunHyun / Image_Segmentation

PyTorch implementation of U-Net, R2U-Net, Attention U-Net, and Attention R2U-Net.

About the input of Attention Gate #14

Closed. lHou157 closed this issue 5 years ago.

lHou157 commented 5 years ago

Thanks for sharing! I have a question: I think g=d5 should instead be written as g=x5, as shown in the figures below. I'm not sure whether my understanding is correct; if not, please point it out!

[two attached screenshots of the decoder code and the network diagram, not preserved]

LeeJunHyun commented 5 years ago

Hi, @lHou157 .

Because the spatial size (resolution) of x5 is different from that of x4, x5 has to be upsampled before it can be used as the gating signal. In the attention gate block, the gating signal g and the input signal x are summed after 1x1 convolutions, so they must have the same spatial size. That is why I used d5 (the upsampled x5) as the gating signal. But if you have a better idea, it can certainly be implemented in a nicer way.
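For reference, here is a minimal sketch of this kind of additive attention gate; the channel sizes, names, and shapes below are illustrative, not the exact code in this repo:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: the gating signal g and the skip feature x
    are summed after 1x1 convolutions, so they must share the same spatial size."""
    def __init__(self, f_g, f_x, f_int):
        super().__init__()
        self.w_g = nn.Sequential(nn.Conv2d(f_g, f_int, kernel_size=1), nn.BatchNorm2d(f_int))
        self.w_x = nn.Sequential(nn.Conv2d(f_x, f_int, kernel_size=1), nn.BatchNorm2d(f_int))
        self.psi = nn.Sequential(nn.Conv2d(f_int, 1, kernel_size=1), nn.BatchNorm2d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        a = self.psi(self.relu(self.w_g(g) + self.w_x(x)))  # attention map in [0, 1]
        return x * a                                         # gate the skip feature

# x5 is the bottleneck feature, x4 the encoder skip connection one level up.
x5 = torch.randn(1, 1024, 14, 14)
x4 = torch.randn(1, 512, 28, 28)

# d5 = Up(x5) matches x4's resolution, so it can be summed with x4 inside the gate.
up5 = nn.Sequential(nn.Upsample(scale_factor=2), nn.Conv2d(1024, 512, kernel_size=3, padding=1))
d5 = up5(x5)                                  # (1, 512, 28, 28)
att5 = AttentionGate(f_g=512, f_x=512, f_int=256)
gated_x4 = att5(g=d5, x=x4)                   # this is why g=d5 is passed, not g=x5
```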

Thank you :)

lHou157 commented 5 years ago

Thank you! I understand what you mean now. You are really kind!

LeeJunHyun commented 5 years ago

Thank you! If you have more questions, please leave a comment anytime :)

sudohainguyen commented 3 years ago

Hi @LeeJunHyun, in the original paper it is stated that x is down-sampled within the attention module (as described in the attached figure), rather than upsampling the gate as you do here. What do you think?
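For comparison, here is a rough sketch of that reading of the paper, where the skip feature x is strided down to the gate's resolution and the attention map is resampled back up; all names and channel sizes are hypothetical, not code from this repo or from the paper's reference implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PaperStyleAttentionGate(nn.Module):
    """Hypothetical sketch of the paper's description: the skip feature x is
    strided down to the (coarser) gate's resolution, the attention map is
    computed there, then resampled back up before gating x."""
    def __init__(self, f_g, f_x, f_int):
        super().__init__()
        self.w_g = nn.Conv2d(f_g, f_int, kernel_size=1)
        self.w_x = nn.Conv2d(f_x, f_int, kernel_size=2, stride=2)  # downsample x to g's size
        self.psi = nn.Conv2d(f_int, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        a = torch.sigmoid(self.psi(self.relu(self.w_g(g) + self.w_x(x))))  # coarse attention map
        a = F.interpolate(a, size=x.shape[2:], mode='bilinear', align_corners=False)
        return x * a                                                       # gate x at full resolution

# Here the gating signal is the bottleneck x5 itself, without upsampling it first.
x5 = torch.randn(1, 1024, 14, 14)   # coarse gating signal
x4 = torch.randn(1, 512, 28, 28)    # finer skip connection
gate = PaperStyleAttentionGate(f_g=1024, f_x=512, f_int=256)
gated_x4 = gate(g=x5, x=x4)          # (1, 512, 28, 28)
```

Either way the two tensors are brought to a common resolution before the addition; the difference is only whether the gate is upsampled to x or x is downsampled to the gate.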