Junelin2333 / LanGuideMedSeg-MICCAI2023

PyTorch code of the MICCAI 2023 paper "Ariadne's Thread: Using Text Prompts to Improve Segmentation of Infected Areas from Chest X-ray images"
GNU General Public License v3.0

Regarding the implementation of self and cross-attention #5

Open xiaopengguo opened 10 months ago

xiaopengguo commented 10 months ago

https://github.com/Junelin2333/LanGuideMedSeg-MICCAI2023/blob/c96a272bc1b49c55b27696ebdeb7a6e93ac62e29/utils/layers.py#L70C8-L81C46

I'm curious about the insight behind adding the positional embedding to q and k, but not to v, in both self- and cross-attention. Is the positional embedding added in each attention block, and if so, why? Looking forward to further insights, and thank you in advance!
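
For context, here is a minimal sketch of the pattern the question describes, in the style popularized by DETR, where positional embeddings are added to queries and keys (which determine the attention weights) but not to values (which carry the content that gets aggregated). The class name, `with_pos_embed` helper, and tensor shapes are illustrative assumptions, not the repo's actual implementation in `utils/layers.py`.

```python
from typing import Optional

import torch
import torch.nn as nn


class AttentionBlockSketch(nn.Module):
    """Sketch of q/k-only positional embedding in self- and cross-attention.

    Hypothetical example (DETR-style); not the code from this repository.
    """

    def __init__(self, embed_dim: int, num_heads: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    @staticmethod
    def with_pos_embed(x: torch.Tensor, pos: Optional[torch.Tensor]) -> torch.Tensor:
        # Position is injected only where attention scores are computed.
        return x if pos is None else x + pos

    def forward(
        self,
        vis: torch.Tensor,                      # (B, N_vis, C) visual tokens
        txt: torch.Tensor,                      # (B, N_txt, C) text tokens
        vis_pos: Optional[torch.Tensor] = None, # positional embedding for vis
        txt_pos: Optional[torch.Tensor] = None, # positional embedding for txt
    ) -> torch.Tensor:
        # Self-attention: pos added to q and k; value stays content-only,
        # so the aggregated output is a position-free mix of features.
        q = k = self.with_pos_embed(vis, vis_pos)
        vis = self.self_attn(q, k, value=vis)[0]

        # Cross-attention: same pattern, with queries from the visual
        # stream and keys/values from the text stream.
        vis = self.cross_attn(
            query=self.with_pos_embed(vis, vis_pos),
            key=self.with_pos_embed(txt, txt_pos),
            value=txt,
        )[0]
        return vis
```

In DETR-style architectures this addition is repeated in every attention layer: since the residual stream can dilute positional information as it passes through blocks, re-adding the embedding to q and k each time keeps the attention weights position-aware while leaving the values, and hence the aggregated features, purely content-based. Whether that is the reasoning behind this repo's design is exactly what the question asks the authors to confirm.
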