landskape-ai / triplet-attention

Official PyTorch Implementation for "Rotate to Attend: Convolutional Triplet Attention Module." [WACV 2021]
https://openaccess.thecvf.com/content/WACV2021/html/Misra_Rotate_to_Attend_Convolutional_Triplet_Attention_Module_WACV_2021_paper.html
MIT License

Triplet Attention (k = 7) #4

Closed ZJUTSong closed 4 years ago

ZJUTSong commented 4 years ago

Triplet Attention (k = 7) — what does the parameter `k` mean?

digantamisra98 commented 4 years ago

Hello. `k` defines the kernel size of the convolution layer in each of the three branches of Triplet Attention.
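
To illustrate where `k` enters, here is a minimal sketch of a single branch, not the repository's exact code (the class name `AttentionBranch` is hypothetical): Z-pool compresses the input to two channel maps (per-pixel max and mean over channels), and a `k × k` convolution with padding `(k - 1) // 2` turns them into a spatial attention map.

```python
import torch
import torch.nn as nn

class AttentionBranch(nn.Module):
    """Hypothetical minimal sketch of one Triplet Attention branch."""

    def __init__(self, k: int = 7):
        super().__init__()
        # `k` is the kernel size discussed in this issue; the padding
        # of (k - 1) // 2 preserves the spatial resolution.
        self.conv = nn.Conv2d(2, 1, kernel_size=k, padding=(k - 1) // 2, bias=False)
        self.bn = nn.BatchNorm2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Z-pool: stack the channel-wise max and mean into 2 channels.
        z = torch.cat(
            [x.max(dim=1, keepdim=True).values, x.mean(dim=1, keepdim=True)],
            dim=1,
        )
        # k x k convolution + sigmoid yields an attention map in (0, 1).
        scale = torch.sigmoid(self.bn(self.conv(z)))
        return x * scale

x = torch.randn(1, 64, 32, 32)
branch = AttentionBranch(k=7)
print(branch(x).shape)  # torch.Size([1, 64, 32, 32])
```

In the full module, the same branch structure is applied along three rotated views of the tensor; a larger `k` gives each attention map a wider spatial receptive field at the cost of more parameters in the branch convolutions.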