xmindflow / deformableLKA

[WACV 2024] Beyond Self-Attention: Deformable Large Kernel Attention for Medical Image Segmentation
https://arxiv.org/abs/2309.00121

Dear author, where in the repository is your complete 3D model defined? Could you help me trace the execution path of your 3D deformable convolution? #4

Closed · liaochuanlin closed this issue 8 months ago

Leonngm commented 8 months ago

Hi,

The network architecture for the Synapse dataset is defined in this file:

https://github.com/xmindflow/deformableLKA/blob/main/3D/d_lka_former/network_architecture/synapse/d_lka_former_synapse.py#L8

The transformer block with the attention mechanism is located in this file:

https://github.com/xmindflow/deformableLKA/blob/main/3D/d_lka_former/network_architecture/synapse/transformerblock.py#L570

Please note that several attention mechanisms are included. They can be selected with the `--trans_block` argument in the training bash script.
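For example, a training run might be launched like this. This is a minimal sketch: the script name and the block name are placeholders I made up for illustration; only the `--trans_block` argument itself is confirmed in this thread.

```bash
# Hypothetical invocation -- the script path and block name below are
# placeholders, not names from the repo. Only the --trans_block argument
# is documented above; it selects which attention block the network uses.
cd 3D
bash <your_train_script>.sh --trans_block <attention_block_name>
```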

Let me know if this clarifies the issue.