zkkli / RepQ-ViT

[ICCV 2023] RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers
Apache License 2.0

Hello! May I ask whether the LayerNorm module is quantized in this model? #1

Open GoatWu opened 10 months ago

zkkli commented 10 months ago

Hi,

In this work, the output (activation) of LayerNorm is quantized, while LayerNorm itself is still computed in floating point.
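To make the distinction concrete, here is a minimal sketch (not the repository's actual code) of this pattern: the LayerNorm computation stays in FP32, and only its output activation is fake-quantized with a calibrated per-tensor scale. The class name, min-max calibration, and symmetric 8-bit scheme are illustrative assumptions; RepQ-ViT's actual contribution is a more refined scale reparameterization for these post-LayerNorm activations.

```python
import torch
import torch.nn as nn

class QuantLayerNormOutput(nn.Module):
    """Illustrative sketch: FP LayerNorm + fake-quantized output activation."""

    def __init__(self, normalized_shape, n_bits=8):
        super().__init__()
        self.ln = nn.LayerNorm(normalized_shape)  # kept in floating point
        self.n_bits = n_bits
        self.scale = None  # activation scale, set during calibration

    def calibrate(self, x):
        # Simple symmetric min-max calibration (illustrative only;
        # RepQ-ViT derives scales via scale reparameterization).
        out = self.ln(x)
        self.scale = out.abs().max() / (2 ** (self.n_bits - 1) - 1)
        return out

    def forward(self, x):
        out = self.ln(x)  # FP computation of LayerNorm itself
        if self.scale is not None:
            # Fake quantization: round to integer grid, then dequantize.
            q = torch.clamp(torch.round(out / self.scale),
                            -(2 ** (self.n_bits - 1)),
                            2 ** (self.n_bits - 1) - 1)
            out = q * self.scale
        return out

# Usage: calibrate on a batch, then run inference (hypothetical shapes).
ln_q = QuantLayerNormOutput(768)
x = torch.randn(4, 197, 768)  # e.g., ViT-B token embeddings
ln_q.calibrate(x)
y = ln_q(x)
```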

GoatWu commented 10 months ago

Thank you! I have another question: in FQ-ViT, both the softmax and LayerNorm layers are computed in integer form. So wouldn't it be unfair to compare the accuracy of the method in this paper against FQ-ViT?
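For context, FQ-ViT's Log-Int-Softmax is built on log2 quantization of the attention probabilities. The sketch below illustrates only that log2 quantization step, not FQ-ViT's actual code; in particular, the real Log-Int-Softmax also replaces the exponential with an integer approximation so the whole operator runs in integer arithmetic, which is omitted here.

```python
import torch

def log2_quant_softmax(attn_scores, n_bits=4):
    """Illustrative log2 quantization of softmax outputs (not FQ-ViT's code)."""
    p = torch.softmax(attn_scores, dim=-1)  # probabilities in (0, 1]
    levels = 2 ** n_bits - 1
    # Integer codes: larger code = smaller probability (power-of-two grid).
    q = torch.clamp(torch.round(-torch.log2(p)), 0, levels)
    return torch.pow(2.0, -q)  # dequantized power-of-two probabilities

# Usage (hypothetical shapes: batch, heads, tokens, tokens).
scores = torch.randn(1, 8, 16, 16)
probs_q = log2_quant_softmax(scores)
```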

zkkli commented 10 months ago

Our method follows the settings of the previous works PTQ4ViT and APQ-ViT, which likewise keep LayerNorm and softmax in floating point and quantize only their outputs, so comparisons with PTQ4ViT and APQ-ViT are entirely fair.