I'm trying to apply channel-wise pruning with torch_pruning, but a few issues persist:

1. A few layers could not be pruned; I print the list of failing layers via a try/except around the pruning call.
2. After saving and reloading the pruned model, it does not seem to hold the same weights as the original model [code not shown here]. What is the best practice to make sure the weights of the unpruned channels get carried over? (See the sketch after this list for the check I have in mind.)
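To make point 2 concrete, the check I have in mind looks roughly like this. It is only a sketch: orig_conv and pruned_conv are hypothetical handles to the same convolution before and after pruning, and idxs are the pruned output-channel indices.

import torch

# Channels that were NOT pruned should keep their original weights
kept = [i for i in range(orig_conv.out_channels) if i not in idxs]
assert torch.equal(pruned_conv.weight, orig_conv.weight[kept])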
The main problem shows up in conv group pruning:
import copy
import torch
import torch_pruning as tp

# mobilenet_v2 and conv_list (layer names as strings) are defined earlier (not shown)
mbv2_model = copy.deepcopy(mobilenet_v2)

for layer in conv_list:
    # 1. Rebuild the dependency graph after every structural change
    DG = tp.DependencyGraph().build_dependency(mbv2_model, example_inputs=torch.randn(1, 3, 32, 32))
    # 2. Resolve the layer name string to the module inside mbv2_model
    layer_for_pruning = eval(layer.replace('model', 'mbv2_model'))
    # Randomly prune a few output channels; randperm avoids the duplicate
    # indices that torch.randint could produce
    idxs = torch.randperm(10)[:5].tolist()
    group = DG.get_pruning_group(layer_for_pruning, tp.prune_conv_out_channels, idxs=idxs)
    # 3. Prune the coupled layers together
    if DG.check_pruning_group(group):  # avoid full pruning, i.e., channels=0
        group.prune()
        _ = mbv2_model(torch.randn(1, 3, 32, 32))  # sanity-check forward pass
        print(layer)

# 4. Save & load; we cannot use .state_dict alone, as the model structure has changed
mbv2_model.zero_grad()  # clear gradients
torch.save(mbv2_model, 'model.pth')
mbv2_model = torch.load('model.pth')
The loop eventually fails on what looks like a depthwise convolution:

RuntimeError: Given groups=27, expected weight to be divisible by 27 at dimension 0, but got weight of size [32, 1, 3, 3] instead
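My guess is that the groups attribute and the weight shape of a depthwise conv get out of sync. One workaround I'm considering is to use only ungrouped convolutions as pruning roots and let the dependency graph handle the coupled depthwise layers. A minimal sketch (named_modules() returns dot-paths rather than the indexed expressions in my conv_list, so this is illustrative only):

import torch.nn as nn

# Keep only plain convs (groups == 1) as pruning roots; depthwise convs
# (groups == in_channels) are then modified only through dependency coupling
prunable = [
    name for name, m in mbv2_model.named_modules()
    if isinstance(m, nn.Conv2d) and m.groups == 1
]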
I also tried iterating over all prunable groups with get_all_groups instead of a hand-built conv_list:

import torch.nn as nn

# test_input is defined earlier (not shown)
mobilenet_v3 = torch.hub.load('pytorch/vision:v0.10.0', 'mobilenet_v3_small', pretrained=True)
pruned_model = copy.deepcopy(mobilenet_v3)
DG = tp.DependencyGraph().build_dependency(pruned_model, example_inputs=test_input)
for group in DG.get_all_groups(ignored_layers=[], root_module_types=[nn.Conv2d, nn.Linear]):
    # loop body trimmed here; roughly, following the torch_pruning examples:
    idxs = [0, 1, 2]  # pruning indices for this group
    group.prune(idxs=idxs)
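To sanity-check the result I run a forward pass and count parameters (assuming tp.utils.count_ops_and_params is available in the installed torch_pruning version):

_ = pruned_model(test_input)  # should succeed if the pruned structure is consistent
macs, nparams = tp.utils.count_ops_and_params(pruned_model, test_input)
print(f"MACs: {macs / 1e6:.2f}M, params: {nparams / 1e6:.2f}M")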