ziplab / QTool

Collection of model quantization algorithms. For any issues, please contact Peng Chen (blueardour@gmail.com).

Biases and BatchNorm not quantized as described in "AQD: Towards Accurate Quantized Object Detection" #1

Closed. blueardour closed this issue 3 years ago.

blueardour commented 3 years ago

Rebasing the repo:

Issue imported from the old URL:

ShechemKS:

After reading the paper "AQD: Towards Accurate Quantized Object Detection", I have been using this repo to quantize an object detector. Reading the code, I realized that the convolution biases (where present) and the batch-normalization parameters are not quantized. However, the paper states:

We propose an Accurate Quantized object Detection (AQD) method to fully get rid of floating-point computation in each layer of the network, including convolutional layers, normalization layers and skip connections.

Specifically, I cannot find the code corresponding to the equations in section 3.2.2 of the paper. Am I missing something? How does this work in the code? Am I not using the correct keywords? (I used the defaults: keyword: ["debug", "dorefa", "lsq"].) The biases don't seem to be quantized either.
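To make the question concrete, here is roughly what I would expect "fully fixed-point" normalization to look like: BN folded into a per-channel scale/shift, with both rounded to fixed-point so inference needs no floating-point math. This is purely my own sketch (the function names and the `frac_bits` parameter are hypothetical), not code from this repo or the paper:

```python
import torch

def quantize_bn_fixed_point(gamma, beta, mean, var, eps=1e-5, frac_bits=8):
    """Hypothetical sketch: fold BN into a per-channel scale/shift and
    round both to fixed-point with `frac_bits` fractional bits.
    NOT the repo's code; it only illustrates what fully fixed-point
    normalization could mean."""
    scale = gamma / torch.sqrt(var + eps)       # per-channel multiplier
    shift = beta - scale * mean                 # per-channel offset
    step = 2.0 ** (-frac_bits)
    q_scale = torch.round(scale / step) * step  # fixed-point scale
    q_shift = torch.round(shift / step) * step  # fixed-point shift
    return q_scale, q_shift

def bn_inference(x, q_scale, q_shift):
    # x: activation tensor (N, C, H, W) already on the integer grid;
    # the multiply and add below involve only fixed-point quantities.
    return x * q_scale.view(1, -1, 1, 1) + q_shift.view(1, -1, 1, 1)
```

Is something along these lines happening anywhere in the repo, or is the BN left in floating point entirely?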

Additionally, in the default configuration the weights are quantized with the adaptive mode var-mean (i.e., to my understanding, the weights are normalized before being quantized). Is this also part of the method adopted in the paper, or should I disable it to replicate those results?
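For context, this is what I understand var-mean to do, per output channel, before the quantizer sees the weights (a hypothetical sketch of my reading, not the repo's exact code):

```python
import torch

def var_mean_normalize(weight, eps=1e-5):
    """Hypothetical sketch of an adaptive 'var-mean' mode: standardize
    the weights per output channel (subtract the mean, divide by the
    standard deviation) before quantization."""
    w = weight.view(weight.size(0), -1)        # flatten each filter
    mean = w.mean(dim=1, keepdim=True)
    std = w.std(dim=1, keepdim=True)
    w = (w - mean) / (std + eps)               # standardized weights
    return w.view_as(weight)
```

If this matches what the mode does, I would like to know whether the paper's reported results used it.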

blueardour commented 3 years ago

Re-open if there are further issues.

Joejwu commented 2 years ago

So have these problems been solved?