Peachypie98 / CBAM

CBAM: Convolutional Block Attention Module for CIFAR100 on VGG19

Using shared MLP? #3

Closed · Dariushuangg closed this issue 8 months ago

Dariushuangg commented 8 months ago

Hi, thanks for sharing your implementation. I noticed that in your CAM class you use two separate networks, self.linear_max and self.linear_avg, to process the max-pooled and average-pooled features. This differs from other implementations and from the original paper, which states:

Both descriptors are then forwarded to a shared network to produce our channel attention map.
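
Concretely, the paper computes the channel attention with shared weights:

M_c(F) = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F))) = sigmoid(W1(W0(F_avg)) + W1(W0(F_max)))

where the MLP weights W0 and W1 are applied to both descriptors.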

Do you have a specific reason for this design, or is it an oversight? Thanks!

Peachypie98 commented 8 months ago

Thank you for raising this issue. It appears to be an oversight on my part. I will promptly implement the necessary changes and provide an updated benchmark using the VGG model.
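
For reference, the shared-MLP version looks roughly like this (a minimal PyTorch sketch, not the exact code in this repo; the class name, reduction ratio, and pooling details are illustrative):

```python
import torch
import torch.nn as nn

class CAM(nn.Module):
    """Channel attention with one MLP shared by both pooled descriptors."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # A single shared MLP replaces the separate
        # self.linear_max / self.linear_avg branches.
        self.shared_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg_desc = torch.mean(x, dim=(2, 3))   # (B, C) average-pooled descriptor
        max_desc = torch.amax(x, dim=(2, 3))   # (B, C) max-pooled descriptor
        # Both descriptors pass through the same network, then are summed.
        attn = self.sigmoid(self.shared_mlp(avg_desc) + self.shared_mlp(max_desc))
        return x * attn.view(b, c, 1, 1)       # channel-wise rescaling
```

Besides matching the paper, sharing the weights roughly halves the parameter count of the attention block.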

Peachypie98 commented 8 months ago

The code has been updated, so I will close this issue.