aaron-xichen / pytorch-playground

Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet)
MIT License

The weight of conv is not quantized, right? #29

Closed yongchaoding closed 4 years ago

yongchaoding commented 5 years ago

From utee/quant.py I can only find where quantization layers are inserted between existing layers. I cannot find where the conv weights themselves are quantized. Why are the conv weights not quantized? Weight quantization is an important part of the overall quantization process.

JensenHJS commented 5 years ago

When I run

```
python quantize.py --type cifar10 --quant_method linear --param_bits 8 --fwd_bits 8 --bn_bits 8 --gpu 0
```

it prints:

```
Traceback (most recent call last):
  File "quantize.py", line 64, in <module>
    sf = bits - 1. - quant.compute_integral_part(v, overflow_rate=args.overflow_rate)
  File "/home/hjs/pytorch-playground-master/utee/quant.py", line 14, in compute_integral_part
    v = v.data.cpu().numpy()[0]
IndexError: too many indices for array
```

I am looking for your help. (Please help a poor kid — the code doesn't run and I can't get any further.)
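A likely cause (an assumption, not confirmed by the maintainer): in newer PyTorch versions, indexing a sorted 1-D tensor returns a 0-dim tensor, so `v.data.cpu().numpy()` yields a 0-d array, and indexing it with `[0]` raises exactly this `IndexError`. The usual fix is to extract the scalar with `.item()` instead. NumPy shows the same 0-d behavior:

```python
import numpy as np

# A 0-d array models what .numpy() returns for a 0-dim tensor.
v = np.array(3.0)  # shape () — no axes to index

try:
    x = v[0]       # raises IndexError: too many indices for array
except IndexError:
    x = v.item()   # extract the Python scalar instead

print(x)
```

In `compute_integral_part`, that would mean replacing `v = v.data.cpu().numpy()[0]` with `v = v.item()` (hypothetical patch; check against your PyTorch version).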

bezoldbrucke commented 4 years ago

Weights are quantized to integers within a range defined by the bit depth (`args.param_bits`) and then de-quantized back to float, so the stored weights remain floating point. The inserted layers quantize the activations.
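The quantize-then-dequantize step described above can be sketched as follows. This is a minimal illustration of symmetric linear (fake) quantization, not the repo's exact implementation: the scale is derived here from the largest absolute weight, whereas quantize.py clips based on `--overflow_rate`.

```python
import numpy as np

def fake_quantize(w, bits):
    """Round weights onto a (2**bits)-level integer grid, then map back
    to float with the same scale. The returned tensor is float, but it
    only takes 2**bits distinct values."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax        # symmetric scale from max |w|
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)  # integer codes
    return q * scale                      # de-quantize back to float

w = np.array([0.5, -1.0, 0.1, 0.7])
wq = fake_quantize(w, bits=8)
```

After this transform, each weight differs from the original by at most half a quantization step (`scale / 2`), which is why the accuracy loss at 8 bits is usually small.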

aaron-xichen commented 4 years ago

thanks @bezoldbrucke