megvii-research / FQ-ViT

[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Apache License 2.0

Isn't the GELU in the MLP layer quantized? #46

Open tianhualefei opened 9 months ago

tianhualefei commented 9 months ago

Isn't the GELU in the MLP layer quantized? In the code it looks like nn.GELU is called directly:

```
(mlp): Mlp(
  (fc1): QLinear(
    in_features=384, out_features=1536, bias=True
    (quantizer): UniformQuantizer()
  )
  (act): GELU()
  (qact1): QAct(
    (quantizer): UniformQuantizer()
  )
  (fc2): QLinear(
    in_features=1536, out_features=384, bias=True
    (quantizer): UniformQuantizer()
  )
  (qact2): QAct(
    (quantizer): UniformQuantizer()
  )
  (drop): Dropout(p=0.0, inplace=False)
)
(qact4): QAct(
  (quantizer): UniformQuantizer()
)
```

XA23i commented 7 months ago

Yes, I think it is a "partially quantized" ViT: the Linear layers and surrounding activations are quantized, but the GELU itself runs in floating point.
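
For reference, a minimal standalone sketch of that forward path, matching the module printout above. `FakeQuant` and `MlpSketch` are illustrative stand-ins, not the repo's actual `QLinear`/`QAct` classes: the point is only that `nn.GELU` sits between quantized ops but is itself evaluated in float, and its output is re-quantized afterwards by `qact1`.

```python
import torch
import torch.nn as nn

class FakeQuant(nn.Module):
    """Illustrative uniform fake-quantizer: a stand-in for FQ-ViT's
    QAct/UniformQuantizer, not the repo's actual implementation."""
    def __init__(self, n_bits=8):
        super().__init__()
        self.n_bits = n_bits

    def forward(self, x):
        # Symmetric per-tensor quantize-dequantize, for illustration only.
        qmax = 2 ** (self.n_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.clamp(torch.round(x / scale), -qmax - 1, qmax) * scale

class MlpSketch(nn.Module):
    """Mirrors the printed Mlp layout: (quantized) Linears and QActs
    wrapped around a plain floating-point nn.GELU."""
    def __init__(self, dim=384, hidden=1536):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)  # QLinear in FQ-ViT (weights quantized)
        self.act = nn.GELU()               # plain float GELU, exactly as in the issue
        self.qact1 = FakeQuant()           # QAct: re-quantizes the GELU output
        self.fc2 = nn.Linear(hidden, dim)  # QLinear in FQ-ViT
        self.qact2 = FakeQuant()           # QAct

    def forward(self, x):
        x = self.fc1(x)     # matmul with quantized weights
        x = self.act(x)     # GELU itself is evaluated in floating point
        x = self.qact1(x)   # only its output gets quantized again
        x = self.fc2(x)
        return self.qact2(x)

if __name__ == "__main__":
    x = torch.randn(1, 197, 384)
    print(MlpSketch()(x).shape)  # torch.Size([1, 197, 384])
```

Keeping the nonlinearity in float between fake-quant nodes is a common pattern in post-training quantization pipelines, since only the matmuls need integer arithmetic to get most of the speed/memory benefit.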