Open GoatWu opened 10 months ago
Thank you! I have another question. In FQ-ViT, both the softmax and LayerNorm layers are computed in integer arithmetic. So, wouldn't it be unfair to compare the accuracy of the method in this paper against FQ-ViT?
Our method follows the settings of the earlier works PTQ4ViT and APQ-ViT, so comparisons with PTQ4ViT and APQ-ViT are fair.
Hi,
In this work, the output (activation) of LayerNorm is quantized, while LayerNorm itself is still computed in floating point.
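To make the distinction concrete, here is a minimal NumPy sketch of that setup: LayerNorm runs in floating point, and only its output activation is put through uniform symmetric quantization. The function names and the 8-bit symmetric scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def layernorm_fp(x, eps=1e-5):
    # LayerNorm itself stays in floating point (as described above)
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def quantize_activation(x, n_bits=8):
    # Illustrative uniform symmetric quantization of the activation:
    # round to an integer grid, then dequantize ("fake quantization")
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

x = np.random.randn(4, 16).astype(np.float32)
y = quantize_activation(layernorm_fp(x))  # quantized LayerNorm output
```

By contrast, an FQ-ViT-style integer LayerNorm would replace the mean/variance/normalization arithmetic inside `layernorm_fp` itself with integer operations, which is the difference the question is about.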