IntelLabs / distiller

Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Apache License 2.0

--load-serialized will make model fail to prune #564

Open Little0o0 opened 2 years ago

Little0o0 commented 2 years ago

I found that a model not wrapped in DataParallel fails to prune, i.e. passing --load-serialized effectively disables pruning.

When I run

python compress_classifier.py -a=resnet20_cifar -p=50 ../../../data/cifar10/ -j=22 --epochs=1 --lr=0.001 --masks-sparsity --compress=../agp-pruning/resnet18.schedule_agp.yaml --load-serialized

the reported total sparsity stays at 0.00:

Total sparsity: 0.00

But if I run the same command without --load-serialized,

python compress_classifier.py -a=resnet20_cifar -p=50 ../../../data/cifar10/ -j=22 --epochs=1 --lr=0.001 --masks-sparsity --compress=../agp-pruning/resnet18.schedule_agp.yaml

the total sparsity is 1.53 after one epoch:

Total sparsity: 1.53
Little0o0 commented 2 years ago

I found that wrapping the model with model = torch.nn.DataParallel(model, device_ids=device_ids) is necessary for pruning, but I do not know why.
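
A likely explanation (not confirmed by the maintainers in this thread): nn.DataParallel nests the original model under a .module attribute, so every parameter name gains a "module." prefix, and Distiller's example AGP schedule YAMLs list weights with that prefix. With --load-serialized the model stays unwrapped, no scheduled name matches an actual parameter, no mask is applied, and sparsity stays 0.00. The sketch below illustrates the name-matching mismatch with hypothetical parameter names; it does not use the real Distiller API.

```python
# Hypothetical sketch: a pruning schedule keyed by parameter names written
# against a DataParallel-wrapped model. The layer names are made up for
# illustration; Distiller's real schedules follow the same "module." pattern.
schedule_weights = {
    "module.layer1.0.conv1.weight",
    "module.layer1.0.conv2.weight",
}

def named_params(wrapped: bool) -> list[str]:
    """Simulate model.named_parameters(): DataParallel adds a 'module.' prefix."""
    base = ["layer1.0.conv1.weight", "layer1.0.conv2.weight"]
    return [("module." + n) if wrapped else n for n in base]

# Wrapped model: every scheduled parameter is found, so masks get applied.
matched_wrapped = [n for n in named_params(wrapped=True) if n in schedule_weights]

# Serialized (unwrapped) model: nothing matches, so nothing is pruned.
matched_plain = [n for n in named_params(wrapped=False) if n in schedule_weights]

print(len(matched_wrapped), len(matched_plain))  # 2 0
```

If this is the cause, either wrapping the model (the default, without --load-serialized) or rewriting the schedule's parameter names without the "module." prefix should restore pruning.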