megvii-research / FQ-ViT

[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Apache License 2.0

Why are the weight and bias in QIntLayerNorm not quantized? #42

Open FungSean opened 1 year ago

FungSean commented 1 year ago

Why are the weight and bias in QIntLayerNorm not quantized? Because they stay in floating point, there are still non-integer operations inside QIntLayerNorm. Does this mean QIntLayerNorm does not achieve integer-only inference, since the weight and bias are not quantized?
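
For illustration, here is a minimal sketch (not the repository's exact code; the function and variable names are hypothetical) of how a floating-point LayerNorm weight and bias could be folded offline into an integer multiplier, a power-of-two shift, and an integer offset, so that the per-token runtime path only needs integer multiplies, adds, and bit shifts:

```python
import torch

def fold_affine_to_int(weight_fp, bias_fp, std_q, mean_q, out_scale, bit=8):
    """Hypothetical helper: fold floating-point LayerNorm weight/bias into
    an integer multiplier M, a power-of-two shift N, and an integer offset B,
    so that the runtime computation on the quantized input x_q is
        y_q = floor((M * x_q + B) / 2 ** N)
    i.e. only integer arithmetic and bit shifts."""
    # Real-valued per-channel multiplier that would act on x_q
    A = weight_fp / (std_q * out_scale)
    # Decompose |A| into an integer mantissa M and a power-of-two exponent N
    N = torch.clamp(bit - 1 - torch.floor(torch.log2(A.abs())), 0, 31)
    M = torch.clamp(torch.floor(A.abs() * 2 ** N), 0, 2 ** (bit - 1) - 1) * A.sign()
    # Fold the bias and the mean subtraction into a single integer offset B
    B = torch.round((bias_fp - mean_q / std_q * weight_fp) / out_scale * 2 ** N)
    return M, N, B

# At inference time only integer operations remain, e.g.:
#   y_q = torch.floor((M * x_q + B) / 2 ** N)
```

In a scheme like this, the floating-point weight and bias would only be used once, offline, to precompute M, N, and B, and would not appear in the integer inference path.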