zkkli / RepQ-ViT
[ICCV 2023] RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers
Apache License 2.0 · 102 stars · 8 forks
Issues
#10 · Reproduced performance is poor in swin-base · by XA23i · opened 2 months ago · 1 comment
#9 · Unable to run the code · by harshaUwm163 · opened 3 months ago · 0 comments
#8 · What does reconstruction mean in quantization? · by XA23i · closed 4 months ago · 2 comments
#7 · Why does running the example still produce a float32 model instead of a quantized one? · by HJQjob · closed 1 month ago · 0 comments
#6 · How to understand the concept of channel before LayerNorm? · by xyhe1996 · closed 7 months ago · 4 comments
#5 · Discrepancy between reported and tested mAP for `mask_rcnn_swin_small` · by GoatWu · opened 8 months ago · 0 comments
#4 · How to save and load the quantized model? · by KaidDuong · opened 9 months ago · 2 comments
#3 · AttributeError: 'Attention' object has no attribute 'reduction' · by Arthur-Ling · opened 9 months ago · 1 comment
#2 · Hello! Here's another question about the log sqrt2 quantization. · by GoatWu · closed 8 months ago · 0 comments
#1 · Hello! Has the LayerNorm module been left unquantized in this model? · by GoatWu · opened 10 months ago · 3 comments