Eric-mingjie / network-slimming

Network Slimming (Pytorch) (ICCV 2017)

Why not stop gradient for channel_selection layer's parameters? #45

Open yzlyty opened 4 years ago

yzlyty commented 4 years ago
```python
import torch
import torch.nn as nn

class channel_selection(nn.Module):
    def __init__(self, num_channels):
        super(channel_selection, self).__init__()
        self.indexes = nn.Parameter(torch.ones(num_channels))
```

Should this be `self.indexes = nn.Parameter(torch.ones(num_channels), requires_grad=False)`?
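For reference, a minimal sketch (hypothetical module name, not from this repo) of the two usual PyTorch ways to keep such a mask out of training:

```python
import torch
import torch.nn as nn

class frozen_selection(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, num_channels):
        super(frozen_selection, self).__init__()
        # Option 1: a Parameter that autograd is told never to update.
        self.indexes = nn.Parameter(torch.ones(num_channels), requires_grad=False)
        # Option 2: a buffer -- saved in state_dict but not a Parameter at all.
        # self.register_buffer("indexes", torch.ones(num_channels))

m = frozen_selection(8)
print(m.indexes.requires_grad)               # False
print([n for n, _ in m.named_parameters()])  # ['indexes'] -- still a parameter
```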

Eric-mingjie commented 4 years ago

That would also work. But even if we do not set requires_grad=False, the gradient of indexes would still be None, so it does not get updated during back-propagation.
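The reason the gradient stays None: as far as I can tell, the repo's channel_selection.forward builds an integer index from self.indexes.data through numpy, which happens outside autograd, so the parameter never enters the graph. A minimal sketch demonstrating this (the forward body is a reconstruction under that assumption, not a verbatim copy):

```python
import torch
import torch.nn as nn
import numpy as np

class channel_selection(nn.Module):
    def __init__(self, num_channels):
        super(channel_selection, self).__init__()
        self.indexes = nn.Parameter(torch.ones(num_channels))

    def forward(self, input_tensor):
        # The index is computed from .data via numpy, outside autograd,
        # so no gradient can flow back to self.indexes.
        selected = np.squeeze(np.argwhere(self.indexes.data.cpu().numpy()))
        if selected.size == 1:
            selected = np.resize(selected, (1,))
        return input_tensor[:, selected, :, :]

sel = channel_selection(8)
x = torch.randn(2, 8, 4, 4, requires_grad=True)
sel(x).sum().backward()
print(sel.indexes.grad)  # None -- the parameter was never in the graph
print(x.grad.shape)      # torch.Size([2, 8, 4, 4]) -- input still gets grads
```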