megvii-research / FQ-ViT

[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Apache License 2.0

improvement(layers): simplify x_q #23

Closed tpoisonooo closed 2 years ago

tpoisonooo commented 2 years ago

Numerically, the two computations are approximately equivalent, and the latter loses slightly less precision.

Before the change:

x_q = (x/in_scale).round() * (in_scale/in_scale1).round()

After the change:

x_q = (x/in_scale1).round()
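
For reference, a minimal numerical sketch of the comparison (the scale values below are made up for illustration; in the actual layer, in_scale and in_scale1 are the input scales held by the quantizer):

import torch

# Hypothetical scales chosen only to make the ratio an integer;
# they are not values from the repo.
in_scale = torch.tensor(0.04)
in_scale1 = torch.tensor(0.01)

x = torch.randn(10000)

# Before: quantize with in_scale, then map onto the in_scale1 grid.
x_q_before = (x / in_scale).round() * (in_scale / in_scale1).round()

# After: quantize directly with in_scale1.
x_q_after = (x / in_scale1).round()

# The two results agree up to a few quantization steps; the direct form
# rounds only once, so its error relative to x / in_scale1 is slightly smaller.
print((x_q_before - x_q_after).abs().max())
print((x_q_before - x / in_scale1).abs().max(),
      (x_q_after - x / in_scale1).abs().max())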
tpoisonooo commented 2 years ago

Ah, it just clicked what this is for. Never mind.