submission2019 / cnn-quantization

Quantization of Convolutional Neural Networks.
237 stars 59 forks

Inquiry about integer inference with bias or variance correction #18

Open rematchka opened 3 years ago

rematchka commented 3 years ago

Hi, thank you for sharing the source code of your work, it's amazing. I'd like to ask about pure integer inference with the bias/variance correction described in the paper:

1. After applying bias and variance correction to the INT8-quantized weights, the weights become floating-point values again. Do you round, floor, or ceil them so that the computation stays purely in INT8?
2. Which variable-length coding (VLC) algorithm do you use? It isn't specified in the paper.
3. How do you calculate the average number of bits for the weights?
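To make the first question concrete, here is a minimal NumPy sketch of one way to keep pure-INT8 compute after bias correction: apply the per-channel mean correction to the dequantized weights, then re-round the corrected weights back onto the same integer grid. This is only an illustration of the idea being asked about, not the authors' implementation; the function names and the symmetric per-channel scheme are assumptions.

```python
import numpy as np

def quantize_per_channel(w, n_bits=8):
    # Symmetric per-channel quantization of a [out_ch, ...] weight tensor.
    qmax = 2 ** (n_bits - 1) - 1
    axes = tuple(range(1, w.ndim))
    scale = np.abs(w).max(axis=axes, keepdims=True) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q, scale

def bias_corrected_int_weights(w, n_bits=8):
    q, scale = quantize_per_channel(w, n_bits)
    w_hat = q * scale                          # dequantized (float) weights
    axes = tuple(range(1, w.ndim))
    # Per-channel bias correction: shift so the mean of the quantized
    # weights matches the mean of the original float weights.
    mean_err = (w - w_hat).mean(axis=axes, keepdims=True)
    w_corr = w_hat + mean_err                  # corrected weights are float again
    # Re-round onto the integer grid so a pure-INT8 matmul is still possible.
    q_corr = np.clip(np.round(w_corr / scale), -128, 127)
    return q_corr.astype(np.int8), scale

# Usage: quantize a random conv weight tensor and check it stays INT8.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3)).astype(np.float32)
q, s = bias_corrected_int_weights(w)
print(q.dtype, q.shape)
```

An alternative (also just an assumption) would be to fold the mean correction into the layer's integer bias term instead of touching the weights, which avoids the second rounding step entirely.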
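On the VLC questions, for context: a common choice (not necessarily what the paper uses) is Huffman coding over the quantized weight values, in which case the average number of bits is the frequency-weighted mean of the per-symbol code lengths. A self-contained sketch, with all names my own:

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return {symbol: code length in bits} for a Huffman code over `symbols`."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap items: (frequency, tiebreaker, {symbol: depth-so-far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every contained symbol one level deeper.
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def average_bits(symbols):
    """Average code length (bits per symbol) under the Huffman code."""
    freq = Counter(symbols)
    lengths = huffman_code_lengths(symbols)
    return sum(freq[s] * lengths[s] for s in freq) / len(symbols)

# Usage: a skewed distribution compresses below the fixed 2-bit width.
symbols = [0] * 8 + [1] * 4 + [2] * 2 + [3] * 2
print(average_bits(symbols))  # → 1.75
```

The entropy of the value distribution gives a lower bound on this average, so it is also sometimes reported instead of an actual code's length.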