landskape-ai / triplet-attention

Official PyTorch Implementation for "Rotate to Attend: Convolutional Triplet Attention Module." [WACV 2021]
https://openaccess.thecvf.com/content/WACV2021/html/Misra_Rotate_to_Attend_Convolutional_Triplet_Attention_Module_WACV_2021_paper.html
MIT License

How to train my own dataset? #3

Closed haobabuhaoba closed 4 years ago

digantamisra98 commented 4 years ago

Hi, triplet attention is a plug-and-play module and is dataset-independent. In any model architecture, for any dataset, you can add the triplet attention module after the final convolution layer in a block, as in the sketch below. Hopefully this clarifies your doubt. Feel free to reopen the issue if there is any further concern. Thanks!
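For reference, here is a condensed sketch of the module following the paper's description (the repo's own implementation is the authoritative version; class names like `AttentionGate` below are illustrative). Because the Z-pool reduces the leading axis to 2 channels before a shared 7x7 conv, the module's parameter count is independent of the input channel count, which is why it drops into any architecture unchanged:

```python
import torch
import torch.nn as nn


class ZPool(nn.Module):
    """Concatenate max- and mean-pooling along the leading (channel) axis."""
    def forward(self, x):
        return torch.cat(
            (x.max(dim=1, keepdim=True)[0], x.mean(dim=1, keepdim=True)), dim=1
        )


class AttentionGate(nn.Module):
    """Z-pool -> 7x7 conv + BN -> sigmoid, applied as a gating mask."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.compress = ZPool()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(1),
        )

    def forward(self, x):
        return x * torch.sigmoid(self.conv(self.compress(x)))


class TripletAttention(nn.Module):
    """Three branches capture (C, W), (C, H) and (H, W) interactions by
    rotating the tensor, gating, rotating back, and averaging the results."""
    def __init__(self):
        super().__init__()
        self.cw = AttentionGate()
        self.hc = AttentionGate()
        self.hw = AttentionGate()

    def forward(self, x):
        # branch 1: rotate so H takes the channel axis, gate, rotate back
        x_ch = self.cw(x.permute(0, 2, 1, 3).contiguous()).permute(0, 2, 1, 3)
        # branch 2: rotate so W takes the channel axis, gate, rotate back
        x_cw = self.hc(x.permute(0, 3, 2, 1).contiguous()).permute(0, 3, 2, 1)
        # branch 3: plain spatial attention on the original layout
        x_hw = self.hw(x)
        return (x_ch + x_cw + x_hw) / 3.0
```

The module preserves the input shape, e.g. `TripletAttention()(torch.randn(2, 64, 32, 32))` returns a `(2, 64, 32, 32)` tensor, so it can be inserted anywhere in a network without touching the surrounding layers.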

Eldad27 commented 3 years ago

@haobabuhaoba To be more specific: in the paper, triplet attention layers should be placed at the end of each bottleneck block (4 in total for ResNet-50 training). Please correct me if I'm wrong.

digantamisra98 commented 3 years ago

@Eldad27 To clarify, triplet attention (like Squeeze-and-Excitation) is added to the output of the final convolution layer, right before the residual connection, of every bottleneck block in each of the 4 layers of a ResNet-50. A ResNet-50 contains 4 layers, each of which contains multiple bottleneck blocks (3, 4, 6, and 3 respectively, i.e. 16 blocks in total), so there are more than 4 triplet attention modules. See the sketch below.
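A minimal sketch of that placement, assuming a torchvision-style `Bottleneck` and the `TripletAttention` class from the sketch above (names here are illustrative, not the repo's exact code):

```python
import torch.nn as nn


class BottleneckWithTriplet(nn.Module):
    """ResNet bottleneck block with triplet attention applied to the output
    of the final conv, right before the residual connection."""
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super().__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, 3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        # parameter count of TripletAttention is independent of channel width
        self.triplet = TripletAttention()
        self.downsample = downsample

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        # attention on the final conv output, before the residual add
        out = self.triplet(out)
        if self.downsample is not None:
            identity = self.downsample(x)
        return self.relu(out + identity)
```

Using this block in place of the standard bottleneck in all 4 layers of a ResNet-50 gives one triplet attention module per block, 16 in total.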