luuuyi / CBAM.PyTorch

Unofficial implementation of the paper "CBAM: Convolutional Block Attention Module"

fc or conv in file resnet_cbam.py, line 31 and line 33? #13

Open huqiaoping opened 4 years ago

huqiaoping commented 4 years ago

For the file resnet_cbam.py, I think line 31 and line 33 are not consistent with the paper. fc1 and fc2 should be nn.Linear, because the paper says:

Both descriptors are then forwarded to a shared network to produce our channel attention map Mc ∈ ℝ^(C×1×1). The shared network is composed of multi-layer perceptron (MLP) with one hidden layer. To reduce parameter overhead, the hidden activation size is set to ℝ^(C/r×1×1), where r is the reduction ratio.

May I know why you use conv instead of Linear?
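
For reference, a minimal sketch of the channel attention module written with nn.Linear, following the quoted passage (this is not the repository's code; the class name and defaults are illustrative):

```python
import torch
import torch.nn as nn

class ChannelAttentionFC(nn.Module):
    """Channel attention with a shared MLP built from nn.Linear,
    as described in the CBAM paper (one hidden layer, reduction ratio r)."""
    def __init__(self, in_planes, ratio=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        # Shared MLP: C -> C/r -> C
        self.fc1 = nn.Linear(in_planes, in_planes // ratio, bias=False)
        self.relu = nn.ReLU(inplace=True)
        self.fc2 = nn.Linear(in_planes // ratio, in_planes, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        b, c, _, _ = x.size()
        # Flatten each pooled C x 1 x 1 descriptor to a length-C vector for nn.Linear
        avg_out = self.fc2(self.relu(self.fc1(self.avg_pool(x).view(b, c))))
        max_out = self.fc2(self.relu(self.fc1(self.max_pool(x).view(b, c))))
        # Sum the two branches, apply sigmoid, reshape to C x 1 x 1 for broadcasting
        return self.sigmoid(avg_out + max_out).view(b, c, 1, 1)
```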

THUGAF commented 3 years ago


1x1 convs are often used in place of linear layers here. On a C x 1 x 1 pooled descriptor, a 1x1 convolution is mathematically equivalent to a fully connected layer with the same weight count, so the choice mainly avoids the flatten/reshape steps and keeps the module fully convolutional.
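
A sketch of the 1x1-conv variant under the same assumptions (illustrative, not necessarily identical to the repository's resnet_cbam.py):

```python
import torch
import torch.nn as nn

class ChannelAttentionConv(nn.Module):
    """Same channel attention, but with 1x1 convolutions instead of nn.Linear.
    On a C x 1 x 1 pooled descriptor this is mathematically equivalent and has
    the same weight count; it simply avoids the view/flatten calls."""
    def __init__(self, in_planes, ratio=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.fc1 = nn.Conv2d(in_planes, in_planes // ratio, kernel_size=1, bias=False)
        self.relu = nn.ReLU(inplace=True)
        self.fc2 = nn.Conv2d(in_planes // ratio, in_planes, kernel_size=1, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # The pooled descriptors stay 4-D (B x C x 1 x 1), so no reshaping is needed
        avg_out = self.fc2(self.relu(self.fc1(self.avg_pool(x))))
        max_out = self.fc2(self.relu(self.fc1(self.max_pool(x))))
        return self.sigmoid(avg_out + max_out)
```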