megvii-research / FQ-ViT

[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Apache License 2.0
301 stars 48 forks

Input quantization #7

Closed xqjiang423 closed 2 years ago

xqjiang423 commented 2 years ago

Hi, I was wondering if the inputs are quantized as well? Or does quantization only cover the weights, LayerNorm, and activations? Thanks!

linyang-zhh commented 2 years ago

@xqjiang423 Hi! Inputs are also quantized. You can check it in this line.

As for the second question, we quantize all modules, including LayerNorm, Softmax, and the weights/activations of the Conv/Linear layers.
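For readers new to the topic, "inputs are quantized" typically means the floating-point input tensor is mapped to low-bit integers via a scale and zero-point before the integer compute path. Below is a minimal, hypothetical min-max uniform-quantization sketch in NumPy; it is only an illustration of the general idea, not FQ-ViT's actual quantizer code (the repo's real quantization modules may use different observers and bit settings):

```python
import numpy as np

def quantize_uniform(x, n_bits=8):
    """Illustrative min-max uniform quantization of an input/activation
    tensor. This is a sketch, not the FQ-ViT implementation."""
    qmin, qmax = 0, 2 ** n_bits - 1
    # Scale maps the observed float range onto the integer grid.
    scale = (x.max() - x.min()) / (qmax - qmin)
    # Zero-point aligns the float minimum with qmin.
    zero_point = np.round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), qmin, qmax)
    return q.astype(np.uint8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Map integers back to (approximate) float values."""
    return (q.astype(np.float32) - zero_point) * scale

# Toy input tensor standing in for a ViT activation.
x = np.random.randn(4, 8).astype(np.float32)
q, s, zp = quantize_uniform(x)
x_hat = dequantize(q, s, zp)
```

The reconstruction error of each element is bounded by the scale, which is why calibrating the min/max range well (the job of post-training observers) matters so much for accuracy.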