megvii-research / FQ-ViT
[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Apache License 2.0 · 293 stars · 47 forks
Issues (newest first)
#48 How to save quantized Int8 weights rather than feature outputs? (dedoogong, opened 4 months ago, 0 comments)
#47 How to calculate the Power-of-Two Factor in Eq. 11? (XA23i, opened 6 months ago, 0 comments)
#46 Is the GELU in the MLP layer not quantized? (tianhualefei, opened 7 months ago, 1 comment)
#45 Is there a problem with Eq. 28? (lonleyodd, opened 8 months ago, 0 comments)
#44 How to understand the concept of a channel before LayerNorm? (wplf, opened 10 months ago, 0 comments)
#43 Why does BaseQuantizer's forward dequantize? (zouzihan3, closed 10 months ago, 2 comments)
#42 Why are weight and bias in QIntLayerNorm not quantized? (FungSean, opened 12 months ago, 0 comments)
#41 Why can channel-wise quantization be used for weights here? (GoatWu, opened 1 year ago, 0 comments)
#40 Quantization (FungSean, opened 1 year ago, 1 comment)
#39 Why is inference time about the same for the floating-point model and the quantized model? (liuxy1103, closed 1 year ago, 1 comment)
#38 Questions on saving the model (jiaruzouu, closed 1 year ago, 1 comment)
#37 Some questions about the quantized inference (PeiyanFlying, closed 1 year ago, 1 comment)
#36 QIntLayerNorm sets self.mode = 'int' (PeiyanFlying, closed 1 year ago, 5 comments)
#35 A question about the LIS computation (caoliyi, closed 1 year ago, 1 comment)
#34 Questions about the computations in QIntLayerNorm and PTF (caoliyi, closed 1 year ago, 1 comment)
#33 The bit parameter in Get_MN in QIntLayerNorm (caoliyi, closed 1 year ago, 3 comments)
#32 Inference speed of the quantized model is lower than that of the normal model (jhss, closed 1 year ago, 6 comments)
#31 A bug in LayerNorm (Ther-nullptr, closed 1 year ago, 4 comments)
#30 Confused about fake quantization (jhss, closed 1 year ago, 1 comment)
#29 TensorRT (shuyuan-wang, closed 1 year ago, 1 comment)
#28 How to visualize Figure 3? (YoloEliwa, closed 1 year ago, 3 comments)
#27 When using this method for object detection, how do I apply the calibration step for a detector in the MMDet framework? (Wuyy-fairy, closed 1 year ago, 1 comment)
#26 What could cause the final accuracy to be zero? (roncedupon, closed 1 year ago, 1 comment)
#25 Is the ImageNet dataset required? With other datasets the accuracy stays at 0 and the run hangs (xushuo999, closed 1 year ago, 1 comment)
#24 Can this be extended to 2/4 bits? (Facatt, closed 1 year ago, 1 comment)
#23 improvement(layers): simplify x_q (tpoisonooo, closed 1 year ago, 1 comment)
#22 The LayerNorm computation is not integer (zysxmu, closed 2 years ago, 9 comments)
#21 log_int_softmax int64 issue (tpoisonooo, closed 1 year ago, 8 comments)
#20 Add "join us" to the README (PeiqinSun, closed 2 years ago, 0 comments)
#19 ViT-B: issue with adding PTF reshape_tensor (tpoisonooo, closed 2 years ago, 6 comments)
#18 improvement(CI): add CI and copyright (tpoisonooo, closed 2 years ago, 3 comments)
#17 ViT-B test issue with input quant=False (tpoisonooo, closed 2 years ago, 1 comment)
#16 Reproducing 8/8/8 for ViT-Base (nfrumkin, closed 2 years ago, 5 comments)
#15 Cannot get scale (youdutaidi, closed 2 years ago, 5 comments)
#14 Update test_quant.py (tpoisonooo, closed 2 years ago, 1 comment)
#13 A question about Log2 quantization when activations or weights equal zero (youdutaidi, closed 2 years ago, 1 comment)
#12 Could you offer code for PTQ testing on the COCO dataset? (youdutaidi, closed 2 years ago, 1 comment)
#11 How to save the quantized model? (Wuyy-fairy, closed 2 years ago, 1 comment)
#10 Quantized model with FQ-ViT (uniqzheng, closed 2 years ago, 1 comment)
#9 How to get a pretrained model to do PTQ? (youdutaidi, closed 2 years ago, 5 comments)
#8 Contact you (CryptonQQQ, closed 2 years ago, 1 comment)
#7 Input quantization (xqjiang423, closed 2 years ago, 1 comment)
#6 Dequantization (ebsrn, closed 2 years ago, 4 comments)
#5 The file structure of the COCO dataset (yifu-ding, closed 2 years ago, 5 comments)
#4 SwinTransformer3D (wangjingg, closed 2 years ago, 1 comment)
#3 What is the purpose of clamping the zero point to the range [qmin, qmax]? (airacid, closed 2 years ago, 3 comments)
#2 Is the GELU operation quantized as well? (Kevinpsk, closed 2 years ago, 1 comment)
#1 How to convert to an int8 model? (detectRecog, closed 2 years ago, 1 comment)