houqb / CoordAttention

Code for our CVPR2021 paper coordinate attention
MIT License

Is the modified code functionally equivalent to your original code? #38

Open Shirugo opened 2 years ago

Shirugo commented 2 years ago

Hello, dear author: thank you for sharing this great project and contributing your valuable ideas. I am a student working on medical image classification, and I want to learn from your innovations and apply them to a DenseNet network. After adding CA attention, I got this error: `TypeError: __init__() missing 1 required positional argument: 'oup'`. So I merged the two parameters `inp` and `oup` of your code into a single `num_channels`, as shown in the following code. Does this change alter the behavior of your original implementation? Also, do you have any experimental results from applying CA attention to other networks, such as ResNet? I sincerely look forward to your reply and wish you a happy life!

```python
# Original signature for reference:
# def __init__(self, inp, oup, reduction=32):

def __init__(self, num_channels, reduction=32):
    super(CoordAtt, self).__init__()
    self.pool_h = nn.AdaptiveAvgPool2d((None, 1))
    self.pool_w = nn.AdaptiveAvgPool2d((1, None))

    # original: mip = max(8, inp // reduction)
    mip = max(8, num_channels // reduction)

    # original: self.conv1 = nn.Conv2d(inp, mip, kernel_size=1, stride=1, padding=0)
    self.conv1 = nn.Conv2d(num_channels, mip, kernel_size=1, stride=1, padding=0)
    self.bn1 = nn.BatchNorm2d(mip)
    self.act = h_swish()

    # original: self.conv_h = nn.Conv2d(mip, oup, kernel_size=1, stride=1, padding=0)
    # original: self.conv_w = nn.Conv2d(mip, oup, kernel_size=1, stride=1, padding=0)
    self.conv_h = nn.Conv2d(mip, num_channels, kernel_size=1, stride=1, padding=0)
    self.conv_w = nn.Conv2d(mip, num_channels, kernel_size=1, stride=1, padding=0)
```
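For what it's worth, the change above should be functionally identical to constructing the original module with `inp == oup`, which is the usual case when inserting an attention block into a DenseNet or ResNet stage that keeps the channel count fixed. Below is a minimal, self-contained sketch of the single-parameter variant; the `h_swish`/`h_sigmoid` helpers and the forward pass are reconstructed from the repo's published code, and the shape check at the end is my own illustration, not something from the issue.

```python
import torch
import torch.nn as nn


class h_sigmoid(nn.Module):
    """Hard sigmoid: ReLU6(x + 3) / 6, as used in the CoordAttention repo."""
    def __init__(self, inplace=True):
        super().__init__()
        self.relu = nn.ReLU6(inplace=inplace)

    def forward(self, x):
        return self.relu(x + 3) / 6


class h_swish(nn.Module):
    """Hard swish: x * h_sigmoid(x)."""
    def __init__(self, inplace=True):
        super().__init__()
        self.sigmoid = h_sigmoid(inplace=inplace)

    def forward(self, x):
        return x * self.sigmoid(x)


class CoordAtt(nn.Module):
    """Coordinate attention with inp/oup collapsed into one num_channels.

    Valid whenever the block preserves the channel count (inp == oup),
    which is how it is typically dropped into DenseNet/ResNet stages.
    """
    def __init__(self, num_channels, reduction=32):
        super().__init__()
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height
        mip = max(8, num_channels // reduction)
        self.conv1 = nn.Conv2d(num_channels, mip, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mip)
        self.act = h_swish()
        self.conv_h = nn.Conv2d(mip, num_channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mip, num_channels, kernel_size=1)

    def forward(self, x):
        identity = x
        n, c, h, w = x.size()
        x_h = self.pool_h(x)                       # (n, c, h, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)   # (n, c, w, 1)
        # Concatenate the two directional descriptors and transform jointly
        y = torch.cat([x_h, x_w], dim=2)
        y = self.act(self.bn1(self.conv1(y)))
        # Split back into per-direction attention maps
        x_h, x_w = torch.split(y, [h, w], dim=2)
        x_w = x_w.permute(0, 1, 3, 2)
        a_h = self.conv_h(x_h).sigmoid()           # (n, c, h, 1)
        a_w = self.conv_w(x_w).sigmoid()           # (n, c, 1, w)
        return identity * a_h * a_w                # broadcast multiply


# Sanity check: the block must preserve the input shape
x = torch.randn(2, 64, 32, 32)
out = CoordAtt(64)(x)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Because `conv_h` and `conv_w` project back to `num_channels` and the result multiplies the identity branch, the output shape always matches the input, so the single-parameter version changes nothing except that it can no longer express `inp != oup` (a configuration the original repo also uses only with equal values in its ImageNet models, as far as I can tell).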