megvii-research / FQ-ViT

[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Apache License 2.0

Question about the LIS (Log-Int-Softmax) computation #35

Closed · caoliyi closed this issue 1 year ago

caoliyi commented 1 year ago

Regarding the computation in `int_exp`: why is `n = 30`?

```python
import torch

def int_exp(x_int, scaling_factor):
    x0 = -0.6931  # -ln2
    n = 30  # sufficiently large integer
    # -ln2 in the integer domain.
    x0_int = torch.floor(x0 / scaling_factor)
    # Clamp inputs below -n * ln2 (their exp is negligibly small).
    x_int = torch.max(x_int, n * x0_int)
    # Range reduction: x = q * (-ln2) + r, with r in (-ln2, 0].
    q = torch.floor(x_int / x0_int)
    r = x_int - x0_int * q
    # Second-order polynomial approximation of exp(r) on the reduced range.
    exp_int, exp_scaling_factor = int_polynomial(r, scaling_factor)
    # exp(x) = 2**(-q) * exp(r), kept integer by pre-scaling with 2**n.
    exp_int = torch.clamp(torch.floor(exp_int * 2**(n - q)), min=0)
    scaling_factor = exp_scaling_factor / 2**n
    return exp_int, scaling_factor
```

And how were the coefficients `coef` in `int_polynomial` approximated?

```python
coef = [0.35815147, 0.96963238, 1.]  # ax**2 + bx + c
```
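For context: the function range-reduces `x = r + q * (-ln2)` with `r in (-ln2, 0]`, approximates `exp(r)` with a fitted polynomial, and recovers `exp(x) = 2**(-q) * exp(r)` as a bit shift. Because the input is clamped to `x_int >= n * x0_int`, we have `q <= n`, so `2**(n - q)` is always a non-negative integer shift; with `n = 30`, anything below `-30 * ln2` underflows to roughly `2**-30`, which is negligible for a softmax. The coefficients match the expansion of I-BERT's fit `exp(x) ≈ 0.3585 * (x + 1.353)**2 + 0.344` on that interval. Below is a minimal sketch of the companion `int_polynomial`, modeled on I-BERT's released code; details in this repo may differ:

```python
import torch

def int_polynomial(x_int, scaling_factor):
    # Integer-only evaluation of a*x**2 + b*x + c for x = x_int * scaling_factor.
    # Dividing through by `a` lets the polynomial be computed as one
    # integer multiply-add chain: a * ((x + b/a) * x + c/a).
    a, b, c = 0.35815147, 0.96963238, 1.0
    b_int = torch.floor(b / a / scaling_factor)
    c_int = torch.floor(c / a / scaling_factor**2)
    z = x_int * (x_int + b_int) + c_int
    # The result z carries a scale of a * scaling_factor**2.
    scaling_factor = a * scaling_factor**2
    return z, scaling_factor
```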

In `log_round`, what is the purpose of this step?

```python
big[extra_mask] = big[extra_mask] + 1
```

linyang-zhh commented 1 year ago
  1. Refer to I-BERT.

  2. It rounds `x_log_floor` at the log level.
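To make the second point concrete: `floor(log2(x))` alone always rounds down to the power of two below `x`; the `big[extra_mask] + 1` step promotes values past the midpoint, so the quantizer rounds to the nearest log2 level instead. A minimal sketch of that behavior, assuming a midpoint test at half a log2 step (the exact threshold in FQ-ViT's `log_round` may differ):

```python
import torch

def log_round(x):
    # Floor in log2 space: the largest integer k with 2**k <= x.
    x_log_floor = torch.floor(torch.log2(x))
    big = x_log_floor
    # Values whose fractional log2 part is >= 0.5 sit closer to level
    # k + 1 than to level k, so bump them up one level. This is what
    # `big[extra_mask] = big[extra_mask] + 1` achieves: it turns
    # floor(log2(x)) into round(log2(x)).
    extra_mask = (torch.log2(x) - x_log_floor) >= 0.5
    big[extra_mask] = big[extra_mask] + 1
    return big

# e.g. log_round(torch.tensor([1.0, 1.5, 2.9, 4.0])) -> tensor([0., 1., 2., 2.])
```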