Closed haobabuhaoba closed 4 years ago
@haobabuhaoba To be more specific: in the paper, the triplet attention layers should be at the end of each bottleneck block (4 in total for ResNet-50 training). Please correct me if I'm wrong.
@Eldad27 To clarify, Triplet Attention (similar to Squeeze-and-Excitation) is added to the output of the final convolution layer of each Bottleneck block, right before the residual connection. A ResNet-50 contains 4 layers, each of which contains multiple Bottleneck blocks (16 blocks in total), so there are more than 4 attention modules.
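To make the placement concrete, here is a minimal PyTorch sketch of a simplified Bottleneck block. The `attention` argument is a stand-in slot for the Triplet Attention module (stubbed with `nn.Identity` here, since the point is *where* it goes, not its internals); the class shape and the `downsample` shortcut are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Simplified ResNet Bottleneck showing where an attention module slots in."""
    expansion = 4

    def __init__(self, in_ch, width, attention=None):
        super().__init__()
        out_ch = width * self.expansion
        self.conv1 = nn.Conv2d(in_ch, width, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        self.conv2 = nn.Conv2d(width, width, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(width)
        self.conv3 = nn.Conv2d(width, out_ch, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Stand-in for TripletAttention; applied to the final conv's output,
        # BEFORE the residual addition.
        self.attention = attention if attention is not None else nn.Identity()
        # 1x1 projection so the shortcut matches the expanded channel count.
        self.downsample = (nn.Conv2d(in_ch, out_ch, 1, bias=False)
                           if in_ch != out_ch else nn.Identity())

    def forward(self, x):
        identity = self.downsample(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out = self.attention(out)  # <- attention goes here, pre-residual
        return self.relu(out + identity)

x = torch.randn(2, 64, 56, 56)
block = Bottleneck(64, 64)
y = block(x)
print(tuple(y.shape))
```

Since the module is applied before the residual add, it refines the block's learned features while the identity shortcut stays untouched, which is the same placement convention SE blocks use.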
Hi, triplet attention is a plug-and-play module and is dataset-independent. In any model architecture, for any dataset, you can add the triplet attention module after the final convolution layer in the block. Hopefully this clarifies your doubt. Feel free to reopen the issue if there is any further concern. Thanks!