After reading the paper "AQD: Towards Accurate Quantized Object Detection", I have been using this repo to quantize an object detector. Looking through the code, I realized that the convolution biases (where present) and the batch normalization parameters are not quantized. However, the paper states:
We propose an Accurate Quantized object Detection (AQD) method to fully get rid of floating-point computation in each layer of the network, including convolutional layers, normalization layers and skip connections.
Specifically, I cannot find the code that corresponds to the equations given in Section 3.2.2 of the paper. Am I missing something? How does that work in the code? Am I using the wrong keywords? (I have used the defaults provided: keyword: ["debug", "dorefa", "lsq"].) The biases do not appear to be quantized with these settings either.
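For reference, this is roughly the kind of integer-only BN folding I expected to find after reading Section 3.2.2. It is only a minimal sketch of my understanding: the function names, the fixed-shift rescaling scheme, and the assumption that the conv output is an integer tensor with scale `s_x` are mine, not taken from the repo or the paper.

```python
import torch

def fold_bn_to_integer(gamma, beta, running_mean, running_var,
                       s_x, eps=1e-5, shift=16):
    # Sketch only: rewrite y = gamma * (x - mean) / sqrt(var + eps) + beta
    # as y = a * x + b, with x = s_x * x_int the dequantized conv output.
    a = gamma / torch.sqrt(running_var + eps)
    b = beta - a * running_mean

    scale = a * s_x                         # per-channel float factor to eliminate
    bias_int = torch.round(b / scale)       # integer bias applied before rescaling
    mult_int = torch.round(scale * 2 ** shift)  # integer multiplier approximating `scale`
    return mult_int.long(), bias_int.long(), shift

def bn_int_forward(x_int, mult_int, bias_int, shift):
    # Apply the folded BN as an integer multiply plus right shift
    # (per-channel, NCHW), so no floating-point ops remain at inference.
    m = mult_int.view(1, -1, 1, 1)
    b = bias_int.view(1, -1, 1, 1)
    return ((x_int + b) * m) >> shift
```

If something equivalent already exists under one of the keywords I listed, a pointer to the relevant file or class would be enough for me.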
Additionally, in the default configurations the weights are quantized with the adaptive mode var-mean (i.e., the weights are normalized before being quantized, as I understand it). Is this also part of the method adopted in the paper, or should I disable it if I want to replicate the reported results?
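To make sure we are talking about the same thing, this is what I understand the var-mean adaptive mode to do before the weight quantizer is applied. Again, this is only a sketch of my reading of the code; the per-output-channel reduction dims and the eps value are my assumptions.

```python
import torch

def normalize_weight_var_mean(w, eps=1e-5):
    # Sketch of my understanding of the 'var-mean' adaptive mode:
    # standardize the conv weights per output channel (zero mean, unit
    # variance) before handing them to the weight quantizer.
    mean = w.mean(dim=(1, 2, 3), keepdim=True)
    var = w.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
    return (w - mean) / torch.sqrt(var + eps)
```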