Closed · hiyyg closed this issue 3 years ago
and AdaptivePool?
@hiyyg We have not tested whether SoftPool can be used as a global pooling operation. You could, however, try this yourself, as the results would definitely be interesting! In the paper we mainly focused on sub-sampling activations along spatial/spatio-temporal dimensions; activation vectorisation through global pooling operations is a bit beyond that scope.
@shuaizzZ You can do adaptive pooling operations in your model's forward(), or even create your own custom class. A very simplified version would be:
class AdaptiveSoftPoolxd:
    def __init__(self, output_size):
        self.out_size = output_size

    def forward(self, x):
        # Get size
        inp_shape = list(x.size())
        # Stride
        stride = inp_shape[2] // self.out_size
        # Kernel
        kernel = inp_shape[2] - (self.out_size - 1) * stride
        # Apply pooling
        x = soft_pool2d(x, kernel_size=kernel, stride=stride)
        return x
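As a sanity check, the stride/kernel arithmetic above can be verified without SoftPool itself: with stride = inp // out and kernel = inp - (out - 1) * stride, the standard no-padding pooling output formula always yields exactly `out` windows. A minimal sketch (pure Python; the helper names are illustrative, not from the repository):

```python
def adaptive_pool_params(inp_size, out_size):
    # Same arithmetic as in AdaptiveSoftPoolxd.forward
    stride = inp_size // out_size
    kernel = inp_size - (out_size - 1) * stride
    return kernel, stride

def pooled_length(inp_size, kernel, stride):
    # Standard pooling output-size formula (no padding)
    return (inp_size - kernel) // stride + 1

# The derived kernel/stride always produce the requested output size
for inp in range(1, 30):
    for out in range(1, inp + 1):
        k, s = adaptive_pool_params(inp, out)
        assert pooled_length(inp, k, s) == out
```

Note that for out_size = 1 this reduces to kernel = stride = inp, i.e. one window covering the whole spatial dimension, which is the global-pooling case discussed in this issue.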
Hi, I want to use AdaptiveSoftPoolxd (not global average pooling), but I get an error: TypeError: 'AdaptiveSoftPoolxd' object is not callable
The AdaptiveSoftPoolxd is:
# AdaptiveSoftPoolxd
class AdaptiveSoftPoolxd:
    def __init__(self, output_size):
        self.out_size = output_size

    def forward(self, x):
        # Get size
        inp_shape = list(x.size())
        # Stride
        stride = inp_shape[2] // self.out_size
        # Kernel
        kernel = inp_shape[2] - (self.out_size - 1) * stride
        # Apply pooling
        x = soft_pool2d(x, kernel_size=kernel, stride=stride)
        return x
And the SE is:
# SE attention
class SE(nn.Module):
    def __init__(self, in_channels, channels, se_ratio=12):
        super(SE, self).__init__()
        # self.avg_pool = nn.AdaptiveAvgPool2d(1)  # AvgPool
        # self.avg_pool = SoftPool2d(kernel_size=(2, 2), stride=(2, 2))  # SoftPool
        self.avg_pool = AdaptiveSoftPoolxd(1)  # AdaptiveSoftPoolxd
        self.fc = nn.Sequential(
            nn.Conv2d(in_channels, channels // se_ratio, kernel_size=1, padding=0),
            nn.BatchNorm2d(channels // se_ratio),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // se_ratio, channels, kernel_size=1, padding=0),
            nn.Sigmoid()
        )

    def forward(self, x):
        print('x is {}'.format(x))
        y = self.avg_pool(x)
        y = self.fc(y)
        return x * y
Is anything wrong? Why can't I call AdaptiveSoftPoolxd?
@cendelian I guess you forgot to inherit from torch.nn.Module when defining AdaptiveSoftPoolxd:
class AdaptiveSoftPoolxd:
should be
class AdaptiveSoftPoolxd(nn.Module):
    def __init__(self, output_size):
        super().__init__()
        ...
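For context: `self.avg_pool(x)` invokes the instance's `__call__`, which `nn.Module` provides (it dispatches to `forward`, among other things such as running hooks). A plain class defines no `__call__`, hence the "object is not callable" TypeError. A minimal pure-Python illustration of the dispatch idea (not the actual nn.Module implementation):

```python
class WithoutCall:
    # No __call__: instances of this class cannot be called like functions
    def forward(self, x):
        return x + 1

class WithCall:
    # Sketch of what nn.Module supplies: __call__ dispatching to forward
    def __call__(self, *args, **kwargs):
        return self.forward(*args, **kwargs)

    def forward(self, x):
        return x + 1

m = WithCall()
assert m(41) == 42  # instance is callable, dispatches to forward

try:
    WithoutCall()(41)
    raised = False
except TypeError:  # 'WithoutCall' object is not callable
    raised = True
assert raised
```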
Thanks! But now I get another error: THCudaCheck FAIL file=/pytorch/aten/src/THC/THCCachingHostAllocator.cpp line=278 error=700 : an illegal memory access was encountered
The class is now:
class AdaptiveSoftPoolxd(nn.Module):
    def __init__(self, output_size):
        super(AdaptiveSoftPoolxd, self).__init__()
        self.out_size = output_size

    def forward(self, x):
        # Get size
        inp_shape = list(x.size())
        # Stride
        stride = inp_shape[2] // self.out_size
        # Kernel
        kernel = inp_shape[2] - (self.out_size - 1) * stride
        # Apply pooling
        x = soft_pool2d(x, kernel_size=kernel, stride=stride)
        return x
Can SoftPool be used as global soft pooling, replacing global average pooling?
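For reference, a "global SoftPool" would apply the SoftPool weighting from the paper, w_i = exp(a_i) / Σ_j exp(a_j), over the entire spatial map of each channel, i.e. a softmax-weighted average instead of the uniform average used by global average pooling. A hedged pure-Python sketch over a flat list of activations (function names are illustrative, not from the repository):

```python
import math

def global_soft_pool(values):
    # Softmax-weighted average: sum_i x_i * exp(x_i) / sum_j exp(x_j)
    m = max(values)  # subtract the max for numerical stability
    weights = [math.exp(v - m) for v in values]
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

def global_avg_pool(values):
    return sum(values) / len(values)

acts = [0.1, 2.0, -1.0, 0.5]
# SoftPool weights larger activations more heavily, so here
# it exceeds the plain mean of the same activations
assert global_soft_pool(acts) > global_avg_pool(acts)
```

Because the weighting is exponential, this behaves between average pooling (which it matches when all activations are equal) and max pooling (which it approaches as one activation dominates).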