Open Maartendrinhuyzen opened 1 month ago
Spatial attention-based networks often benefit from incorporating edge/shape information to improve the reconstruction of deep semantic features into a high-resolution segmentation map. For instance, Karthik et al. (2022) use an attention module to capture contextual information from the contour feature maps in their spatial neighborhoods. Wei et al. (2021a) use deep features to filter out background noise in shallow features while preserving edge information. Li et al. (2020d) use the left atrial boundary as an attention mask on scar features to perform shape attention. Qin et al. (2020) propose an attention distillation technique that passes fine-grained details down to lower-resolution attention maps, improving their performance.
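The boundary-as-attention-mask idea (as in Li et al. 2020d) can be sketched in a few lines. This is a minimal illustration, not any paper's actual implementation: `boundary_mask_attention`, the `alpha` residual weight, and the tensor shapes are all assumptions made for the example.

```python
import numpy as np

def boundary_mask_attention(features, boundary_mask, alpha=0.5):
    """Weight feature maps by a boundary mask (shape-attention sketch).

    features:      (C, H, W) feature maps (hypothetical layout).
    boundary_mask: (H, W) values in [0, 1], high near the boundary of interest.
    alpha:         residual floor so off-boundary features are attenuated,
                   not zeroed out entirely.
    """
    # Blend a uniform floor with the mask, then broadcast over channels.
    attn = alpha + (1.0 - alpha) * boundary_mask
    return features * attn[None, :, :]

# Toy example: 2 channels on a 4x4 grid, boundary along one column.
feats = np.ones((2, 4, 4))
mask = np.zeros((4, 4))
mask[:, 2] = 1.0
out = boundary_mask_attention(feats, mask)
```

Features on the boundary column keep their full magnitude (weight 1.0), while the rest are scaled down to `alpha` (0.5 here), so downstream layers see the boundary region emphasized without losing the background context completely.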
We currently have decoder attention. https://ieeexplore.ieee.org/document/9741336 https://napier-repository.worktribe.com/preview/2885434/IEEE_JBHI.pdf
In the human visual cognition system, we are naturally skilled at focusing on areas of interest and ignoring interference from background information, which helps us identify and judge more accurately and efficiently. Imitating this, attention mechanisms have been proposed to adaptively assign weights to different regions of an image, enabling neural networks to focus on regions relevant to the target task and disregard irrelevant areas (Xie et al., 2023, arXiv:2305.17937).
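The "adaptively assign weights to different regions" step can be made concrete with a tiny sketch: derive a per-pixel saliency map from the features themselves, normalize it with a softmax over spatial positions, and reweight the features. Everything here (the function name, channel-mean pooling, the rescaling so a uniform map gives weight 1.0) is an assumption for illustration, not the mechanism of any specific paper above.

```python
import numpy as np

def spatial_attention(features):
    """Minimal spatial-attention sketch over a (C, H, W) feature tensor.

    A channel-averaged saliency map is passed through a softmax across
    all H*W positions, then rescaled so a uniform map yields weight 1.0
    everywhere (i.e. no-op when nothing stands out).
    """
    c, h, w = features.shape
    saliency = features.mean(axis=0)          # (H, W) channel-average
    logits = saliency.reshape(-1)
    weights = np.exp(logits - logits.max())   # numerically stable softmax
    weights = weights / weights.sum()
    attn = weights.reshape(h, w) * (h * w)    # uniform saliency -> all ones
    return features * attn[None, :, :]

# Uniform features produce uniform attention, leaving the input unchanged.
feats = np.ones((3, 2, 2))
out = spatial_attention(feats)
```

Any region whose saliency exceeds the average gets weight above 1.0 and is amplified; flat regions are suppressed, which is exactly the "focus on important regions, disregard irrelevant areas" behavior described above.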