ziplab / SAQ

This is the official PyTorch implementation for "Sharpness-aware Quantization for Deep Neural Networks".

Quantize_first_last_layer #2


mmmiiinnnggg commented 2 years ago

Hi! I noticed that in your code, you set bits_weights=8 and bits_activations=32 for the first layer by default, which is not what is claimed in your paper: "For the first and last layers of all quantized models, we quantize both weights and activations to 8-bit." I also see an accuracy drop if I set bits_activations to 8 for the first layer. Could you please explain the reason? Thanks!

liujingcs commented 1 year ago

We do not apply quantization to the input images because they have already been quantized to 8-bit during image preprocessing.
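
To illustrate the point: a raw image pixel is a uint8 value with only 256 possible levels, so after the standard ToTensor() scaling (dividing by 255), running the input through an 8-bit uniform quantizer is a no-op. A minimal sketch, assuming a hypothetical `fake_quantize` helper (illustrative only, not SAQ's actual API) where bits=32 is treated as "leave at full precision":

```python
import torch

def fake_quantize(x, bits):
    """Hypothetical uniform quantizer: map x in [0, 1] onto 2^bits - 1 steps."""
    if bits >= 32:  # convention assumed here: 32 means full precision (no quantization)
        return x
    levels = 2 ** bits - 1
    return torch.round(x.clamp(0, 1) * levels) / levels

# An 8-bit pixel already takes one of 256 values; re-quantizing to 8-bit changes nothing.
pixels = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)
x = pixels.float() / 255.0                       # what torchvision's ToTensor() produces
assert torch.allclose(fake_quantize(x, 8), x)    # identical: no extra information is lost
```

So setting bits_activations=32 for the first layer does not contradict the paper's 8-bit claim: the first layer's input is already effectively 8-bit, and skipping the redundant quantizer avoids an unnecessary rounding step.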