aaron-xichen / pytorch-playground

Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet)
MIT License
2.6k stars 607 forks

Quantizing VGG-16 #20

Closed indrajitsg closed 4 years ago

indrajitsg commented 6 years ago

I tried quantizing a VGG-16 network, but the size of the network hasn't changed. I loaded a VGG-16 model from torchvision and ran the portion of quantize.py (lines 48 - 79) that performs the quantization. When I saved the model, the size had actually increased slightly.

Can you please tell me what I am doing incorrectly?

AkashGanesan commented 5 years ago

The quantizers don't change the underlying data type of the model's tensors. They only emulate quantization: weights and activations are snapped to the values they would take with only n bits, but are still stored as full-precision floats, so the saved size should remain roughly the same. The quantization step may even increase the size slightly, since the quantization parameters are computed via a forward pass and extra logic is attached to quantize the inputs. Of course, it would be very helpful if the author can clarify.
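A minimal NumPy sketch of this kind of emulated ("fake") linear quantization may make the point concrete. Note this is an illustration of the general technique, not the repo's actual quantize.py code; the function name and scaling scheme are assumptions for the example:

```python
import numpy as np

def fake_quantize(w, n_bits=8):
    """Simulate n-bit quantization: snap values to a uniform grid,
    then dequantize back to float32. Storage size is unchanged.
    (Illustrative sketch, not the repo's quantize.py implementation.)"""
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    q = np.round(w / scale)                            # map to integer grid levels
    q = np.clip(q, -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return (q * scale).astype(np.float32)              # back to full-precision floats

w = np.random.randn(64, 64).astype(np.float32)
wq = fake_quantize(w, n_bits=8)

print(w.nbytes == wq.nbytes)      # True: same byte size, quantization is only emulated
print(len(np.unique(wq)) <= 256)  # True: at most 2**8 distinct values
```

The tensor `wq` takes at most 2^8 distinct values, but it is still a float32 array, which is why saving the model after this kind of quantization does not make the file smaller; an actual size reduction would require storing the integer grid levels in a low-bit dtype.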

aaron-xichen commented 4 years ago

thanks @AkashGanesan