he-y / soft-filter-pruning

Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
https://arxiv.org/abs/1808.06866

Why do the zeroed channels still receive gradients? #36

Open AlexSunNik opened 11 months ago

AlexSunNik commented 11 months ago

Theoretically speaking, when you prune filters by zeroing them along the output dimension, the corresponding weights should not receive any gradient during the backward pass. How does the code handle this? Could you point me to the relevant section?
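For concreteness, here is a minimal PyTorch sketch of the concern, assuming a plain conv → BN → ReLU block as in ResNet (the module names are mine, not this repo's). With the default BN bias of zero, the zeroed filter receives exactly zero gradient:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical conv -> BN -> ReLU block; not code from this repo.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
bn = nn.BatchNorm2d(8)  # defaults: weight (gamma) = 1, bias (beta) = 0

# Soft-prune filter 0 by zeroing its weights, as SFP does.
with torch.no_grad():
    conv.weight[0].zero_()

x = torch.randn(4, 3, 16, 16)
out = torch.relu(bn(conv(x)))
out.sum().backward()

# Channel 0 is 0 everywhere before the ReLU (BN maps the constant-zero
# channel to its bias, which is 0), and ReLU'(0) = 0, so no gradient
# flows back to the zeroed filter:
print(conv.weight.grad[0].abs().max())  # tensor(0.)
```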

I do notice that the BN layers are not masked. If the BN bias is kept, there will indeed be gradients through it, but this seems like a very hacky workaround.
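For reference, the pruning step the paper describes zeroes the smallest-norm filters after each training epoch but does not mask their gradients, which is what is supposed to let them be "reconstructed". A minimal sketch of that step, assuming the paper's scheme (function and variable names are mine, not this repo's):

```python
import torch

def soft_prune_layer(weight: torch.Tensor, prune_rate: float) -> torch.Tensor:
    """Zero the prune_rate fraction of filters with the smallest L2 norm.

    SFP applies this to the weights only, after each epoch; gradients are
    not masked, so the zeroed filters can in principle keep being updated
    and may be selected back in a later epoch.
    """
    n_filters = weight.size(0)
    n_pruned = int(n_filters * prune_rate)
    norms = weight.detach().view(n_filters, -1).norm(p=2, dim=1)
    pruned_idx = norms.argsort()[:n_pruned]
    with torch.no_grad():
        weight[pruned_idx] = 0.0
    return pruned_idx  # keep, to inspect these filters' gradients later
```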

lzd19981105 commented 1 month ago

When I use the scripts to prune ResNet-20 on CIFAR-10, the pruned weights do not receive gradients in the backward pass, which means they cannot be reconstructed. The same holds for the BN bias: it is initialized to 0, so it does not affect the output of a channel whose filters are masked to zero. This code does not work as described in the paper it came from.
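A hypothetical check (not part of the repo's scripts) that can be dropped into the training loop right after loss.backward() to see whether any gradient reaches the masked filters:

```python
import torch

def pruned_grad_norm(weight: torch.Tensor, pruned_idx: torch.Tensor) -> float:
    """Total |grad| on the soft-pruned filters of one conv layer.

    If this stays exactly 0.0 across training steps, the zeroed filters
    are never updated and cannot be reconstructed as SFP intends.
    """
    if weight.grad is None:
        return 0.0
    return weight.grad[pruned_idx].abs().sum().item()

# usage, after loss.backward():
#   print(pruned_grad_norm(conv.weight, pruned_idx))
```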