
Quantization of Convolutional Neural Networks.

cnn-quantization

Dependencies

HW requirements

NVIDIA GPU with CUDA support

Data

Prepare environment

Building CUDA kernels for GEMMLOWP

To improve performance, GEMMLOWP quantization was implemented in CUDA, and the kernels need to be compiled before running inference.
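For example (the build script name and location are an assumption based on typical repo layout, not stated in this section):

cd kernels; ./build_all.sh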

Run inference experiments

Post-training quantization of ResNet-50

Note that accuracy results may vary by up to 0.5% due to data shuffling.

[figure: experiments]

ACIQ: Analytical Clipping for Integer Quantization

We solve eq. 6 numerically to find the optimal clipping value α for both the Laplace and Gaussian priors.
[image: eq. 6]
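The equation image is not reproduced above. As a sketch of what eq. 6 minimizes, here is the Laplace(0, b) case reconstructed from the standard ACIQ derivation (treat it as an assumption rather than a quotation of the paper):

```latex
% Expected clipping + quantization distortion for X ~ Laplace(0, b),
% clipped to [-alpha, alpha] and uniformly quantized with M bits:
E\big[(X - Q(X))^2\big] \approx 2 b^2 e^{-\alpha/b} + \frac{\alpha^2}{3 \cdot 2^{2M}}
```

Minimizing over α trades the clipping distortion of the tails (first term) against the uniform quantization noise inside the clipped range (second term).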

Solving eq. 6 numerically for bit-widths 2, 3, and 4 yields optimal clipping values of 2.83b, 3.89b, and 5.03b respectively, where b is the Laplace scale parameter, i.e., the mean absolute deviation of the activation from its expected value.

Numerical solution source code: mse_analysis.py
[figure: aciq-mse]
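For illustration, a minimal numerical search along these lines (a sketch assuming the Laplace objective above; not the repo's mse_analysis.py):

```python
# Sketch: find the optimal clipping value alpha for Laplace(0, b) activations,
# assuming MSE(alpha) ~= 2*b^2*exp(-alpha/b) + alpha^2 / (3 * 2^(2M)).
import numpy as np
from scipy.optimize import minimize_scalar

def quant_mse(alpha, b=1.0, num_bits=4):
    clip_term = 2 * b**2 * np.exp(-alpha / b)        # distortion from clipping the tails
    quant_term = alpha**2 / (3 * 2**(2 * num_bits))  # uniform quantization noise
    return clip_term + quant_term

for m in (2, 3, 4):
    res = minimize_scalar(quant_mse, bounds=(0.01, 20.0), args=(1.0, m), method="bounded")
    print(f"{m}-bit: alpha* = {res.x:.2f} * b")
```

With b = 1 this reproduces clipping values close to the ones quoted above (≈2.83, 3.89, 5.03).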

Per-channel bit allocation

Given a quota on the total number of bits allowed to be written to memory, the optimal bit-width assignment M_i for channel i is given by eq. 11.
[image: eq. 11]

Source code: bit_allocation_synthetic.py

[figure: bit-alloc]
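Since eq. 11 is only available as an image, here is a hedged sketch of the underlying idea rather than the paper's closed form: spend a total bit budget greedily, one bit at a time, on the channel whose quantization MSE drops the most. This is an illustration, not the repo's bit_allocation_synthetic.py:

```python
# Illustrative greedy per-channel bit allocation under a total bit quota,
# using the uniform-quantization noise proxy MSE_i ~= range_i^2 / (3 * 4^M_i).
import numpy as np

def allocate_bits(ranges, total_bits, max_bits=8):
    bits = np.zeros(len(ranges), dtype=int)
    def mse(r, m):
        return r**2 if m == 0 else r**2 / (3.0 * 4.0**m)
    for _ in range(total_bits):
        # marginal MSE reduction from granting channel i one more bit
        gains = [mse(r, m) - mse(r, m + 1) if m < max_bits else -np.inf
                 for r, m in zip(ranges, bits)]
        bits[int(np.argmax(gains))] += 1
    return bits

# channels with a larger dynamic range receive more bits
print(allocate_bits(np.array([0.1, 1.0, 4.0]), total_bits=12))
```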

Bias correction

We observe an inherent bias in the mean and the variance of the weight values following their quantization.
Source code: bias_correction.ipynb

[figure: bias-err]
We calculate this bias using equation 12.
[image: eq. 12]
Then, we compensate for the bias for each channel of W as follows:
[image: eq. 13]
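As a hedged illustration of the idea (the repo's implementation is in bias_correction.ipynb; the helper below is hypothetical): quantization shifts each channel's mean and rescales its variance, so we measure both offsets and fold them back into the quantized weights:

```python
# Hypothetical per-channel bias correction: restore each output channel's
# pre-quantization mean and variance after quantizing the weights (cf. eqs. 12-13).
import torch

def correct_bias(w_fp: torch.Tensor, w_q: torch.Tensor) -> torch.Tensor:
    """w_fp: original float weights, w_q: quantized weights; shape (C_out, ...)."""
    c = w_fp.shape[0]
    flat_fp, flat_q = w_fp.reshape(c, -1), w_q.reshape(c, -1)
    mu_fp, mu_q = flat_fp.mean(dim=1, keepdim=True), flat_q.mean(dim=1, keepdim=True)
    std_fp, std_q = flat_fp.std(dim=1, keepdim=True), flat_q.std(dim=1, keepdim=True)
    # rescale to match the original variance, then shift to match the original mean
    corrected = (flat_q - mu_q) * (std_fp / (std_q + 1e-12)) + mu_fp
    return corrected.reshape_as(w_q)
```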

Quantization

We use the GEMMLOWP quantization scheme described here. We implemented this scheme in PyTorch and optimize it by applying ACIQ to reduce the dynamic range and to optimally allocate bits for each channel.

Quantization code can be found in int_quantizer.py
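As a rough sketch of the scheme (asymmetric uniform quantization with a scale and zero point; a simplified simulation, not the actual int_quantizer.py):

```python
# Simplified GEMMLOWP-style asymmetric quantization (simulated): map the tensor
# range [min, max] onto the integer grid [0, 2^bits - 1] via a scale and zero point.
import torch

def gemmlowp_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    qmin, qmax = 0.0, 2.0**num_bits - 1.0
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min).clamp(min=1e-12) / (qmax - qmin)
    zero_point = torch.round(qmin - x_min / scale).clamp(qmin, qmax)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale  # dequantize for simulated inference
```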

Additional use cases and experiments

Inference using offline statistics

Collect statistics on 32 images

python inference/inference_sim.py -a resnet50 -b 1 --qtype int8 -sm collect -ac -cs 32

Run an inference experiment with W4A4 + ACIQ + Bit Alloc(A) + Bit Alloc(W) + bias correction, using offline statistics:

python inference/inference_sim.py -a resnet50 -b 512 -pcq_w -pcq_a --qtype int4 -qw int4 -c laplace -baa -baw -bcw -sm use
  • Prec@1 74.2 Prec@5 91.932

4-bit quantization with a clipping threshold of 2 standard deviations

python inference/inference_sim.py -a resnet50 -b 512 -pcq_w -pcq_a -sh --qtype int4 -c 2std
  • Prec@1 15.440 Prec@5 34.646

ACIQ with layer-wise quantization

python inference/inference_sim.py -a resnet50 -b 512 --qtype int4 -c laplace -sm use
  • Prec@1 71.404 Prec@5 90.248

Bin allocation and variable-length coding

Given a quota on the total number of bits allowed to be written to memory, the optimal number of bins B_i for channel i is derived from eq. 10.
[image: eq. 10]

We evaluate the effect of Huffman coding on activations and weights by measuring the average entropy over all layers.

python inference/inference_sim.py -a vgg16 -b 32 --device_ids 4 -pcq_w -pcq_a -sh --qtype int4 -qw int4 -c laplace -baa -baw -bcw -bata 5.3 -batw 5.3 -mtq -me -ss 1024
  • Prec@1 70.801 Prec@5 91.211

  • Average bit rate: avg.entropy.act = 2.215521374096473
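For reference, the per-tensor average entropy can be estimated along these lines (an illustrative sketch, not the repo's measurement code):

```python
# Sketch: empirical entropy (bits per element) of an integer-quantized tensor;
# this lower-bounds the average rate a Huffman code could achieve on it.
import numpy as np

def entropy_bits(q: np.ndarray) -> float:
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# e.g. Laplace-like values quantized to 4 bits (16 bins)
q = np.clip(np.round(np.random.laplace(scale=2.0, size=10_000)), -8, 7).astype(int)
print(f"avg. entropy: {entropy_bits(q):.3f} bits/value")
```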