AvrahamRaviv opened 4 weeks ago
I implemented it for my thesis. Intrinsically, the PixelShuffle is never pruned itself; it only expects a given input depth. So I bundled the PixelShuffle operator together with the sub-pixel convolution usually placed right before it for the SR operation. The problem then becomes: which output channels to prune in the conv layer, given an input channel of the PixelShuffle. You can find my implementation here https://github.com/MaGiiK02/sr_structured_pruning/blob/main/pruners/UpsamplePruner.py where the Upsample block represents the standard Upsample operation.
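The channel mapping described above can be sketched as follows (a minimal sketch, not the linked implementation; the function name is illustrative). With upscale factor r, PixelShuffle builds each of its output channels from a contiguous group of r² input channels, so pruning one channel of the shuffled output means pruning the whole corresponding group of conv output channels:

```python
def pixelshuffle_prune_map(pruned_channels, upscale_factor):
    """Map pruned PixelShuffle output channels to conv output channels.

    For PixelShuffle with upscale factor r, output channel c is assembled
    from input channels [c * r**2, (c + 1) * r**2). Pruning output channel c
    therefore requires pruning that entire group in the preceding conv.
    """
    r2 = upscale_factor ** 2
    conv_channels = []
    for c in pruned_channels:
        conv_channels.extend(range(c * r2, (c + 1) * r2))
    return conv_channels


# Example: pruning shuffled channel 1 with r=2 prunes conv channels 4..7.
print(pixelshuffle_prune_map([1], 2))  # [4, 5, 6, 7]
```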
@angelinimattia Wow, thanks for the reply! It’s great to see others have tackled this implementation. I’ll be sure to integrate it into my code soon.
@AvrahamRaviv Feel free to ask me while I still remember what I did, since I submitted my master's thesis a few weeks ago.
Hi, torch.nn.PixelShuffle changes the number of channels, which is not handled well by torch-pruning. For example, I have a layer with an output of shape 1×16×576×1024, and after PixelShuffle the output is 1×4×1152×2048. In torch-pruning, the dependency graph thinks it is a layer with the same number of output channels, which causes two errors:
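For reference, the shape change above follows PixelShuffle's rule (N, C, H, W) → (N, C / r², H·r, W·r). A small helper (the function name is made up for illustration) makes the example shapes easy to check:

```python
def pixelshuffle_out_shape(shape, r):
    """Output shape of torch.nn.PixelShuffle(r) for an (N, C, H, W) input.

    PixelShuffle rearranges channel blocks into spatial positions:
    channels shrink by r**2 while height and width grow by r.
    """
    n, c, h, w = shape
    assert c % (r * r) == 0, "channels must be divisible by r**2"
    return (n, c // (r * r), h * r, w * r)


# The shapes from the issue correspond to an upscale factor of 2:
print(pixelshuffle_out_shape((1, 16, 576, 1024), 2))  # (1, 4, 1152, 2048)
```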