gorkemcanates / Dual-Cross-Attention

Official PyTorch implementation of Dual Cross-Attention for Medical Image Segmentation
MIT License

Dual Cross-Attention #11

Open quxianjiuguo opened 9 months ago

quxianjiuguo commented 9 months ago

First, a thumbs up for your work. But I have a question: the paper describes decomposing cross-attention into spatial and channel components. What is the difference between the two, and why are they called spatial and channel? The code only shows that the tensors used in the self-attention computation differ between them, which seems to have nothing to do with space or channels.

gorkemcanates commented 3 months ago

The difference comes from the shapes of the embedded patches used to generate the attention scores. Please check Sections 2.2 and 2.3 in the paper.
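To make the shape argument concrete, here is a minimal NumPy sketch (not the repository's code, and with made-up dimensions) of why the two attention variants earn their names: with token embeddings of shape `(P, C)` for `P` patch positions and `C` channels, computing scores on the matrix as-is yields a `(P, P)` map relating spatial positions, while computing them on the transpose yields a `(C, C)` map relating channels.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

P, C = 16, 64  # hypothetical: P patch tokens, each a C-dim embedding
tokens = np.random.randn(P, C)

# "Spatial" attention: scores relate patch positions to one another.
# (P, C) @ (C, P) -> (P, P): one weight per pair of spatial locations.
spatial_scores = softmax(tokens @ tokens.T / np.sqrt(C), axis=-1)

# "Channel" attention: transpose first, so scores relate channels.
# (C, P) @ (P, C) -> (C, C): one weight per pair of channels.
channel_scores = softmax(tokens.T @ tokens / np.sqrt(P), axis=-1)

print(spatial_scores.shape)  # (P, P) = (16, 16)
print(channel_scores.shape)  # (C, C) = (64, 64)
```

So even though both variants run the same attention formula, the axis along which the score matrix is formed decides whether the weights mix spatial positions or channels, which is what the shape remark above refers to.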