Xiuyu-Li / q-diffusion
[ICCV 2023] Q-Diffusion: Quantizing Diffusion Models.
https://xiuyuli.com/qdiffusion/
MIT License · 290 stars · 20 forks
Issues (newest first)
#37 · Can TensorRT employ q-diffusion's reconstruction method to quantize SD-XL? · hanhanpp · opened 2 months ago · 0 comments
#36 · Q-Diffusion support for Stable Diffusion XL (CPU) · badhri-intel · opened 2 months ago · 0 comments
#35 · How to do it in SDXL-Turbo? · ApolloRay · opened 3 months ago · 0 comments
#34 · 'functions' package error & how to extract FID · parkjjoe · opened 3 months ago · 0 comments
#33 · Can you please add a tutorial on quantization to the README? · ningmenghongcha · opened 3 months ago · 0 comments
#32 · BOPs measurement · rocco-manz · opened 3 months ago · 0 comments
#31 · Stable Diffusion samples generated with the quantized checkpoint look strange · colorjam · opened 4 months ago · 2 comments
#30 · How to deploy in an environment without GPUs? · loonglongdada · closed 4 months ago · 0 comments
#29 · Compatibility with macOS? · nachiket · opened 6 months ago · 0 comments
#28 · How to obtain the quantization parameters · miaott1234 · opened 7 months ago · 2 comments
#27 · Q-Diffusion for Stable Diffusion · stein-666 · opened 7 months ago · 0 comments
#26 · Why quantize QKMatMul as a block when it has no submodules? · Sugar929 · opened 8 months ago · 6 comments
#25 · What is the minimum dataset size that meets the requirements of text-to-image calibration? · hanhanpp · opened 8 months ago · 0 comments
#24 · Errors when running LSUN-bed w8a8 quantization · Sugar929 · opened 9 months ago · 0 comments
#23 · End-to-End Quantization for Speedup and Memory Savings: Inviting Contributions! · Xiuyu-Li · opened 9 months ago · 2 comments
#22 · Does q-diffusion work on SDXL? · TruthSearcher · closed 3 months ago · 3 comments
#21 · Calibration datasets · hanhanpp · closed 8 months ago · 0 comments
#20 · Add calibration code · Xiuyu-Li · closed 9 months ago · 0 comments
#19 · Model sizes · prinshul · opened 9 months ago · 1 comment
#18 · Could you please provide the parameters for LSQ and the block reconstruction? · yuzheyao22 · closed 9 months ago · 1 comment
#17 · Open-source more code? · lingffff · closed 9 months ago · 1 comment
#16 · Question about the inference process · JiaojiaoYe1994 · opened 11 months ago · 0 comments
#15 · Why doesn't the w4a8 quantization method accelerate the inference speed of Stable Diffusion models? · felixslu · closed 3 months ago · 2 comments
#14 · Why does this quantized model need more than 24 GB of GPU memory, far more than the ideal 500 MB? · felixslu · opened 11 months ago · 4 comments
#13 · Error: No such file or directory: 'models/ldm/stable-diffusion-v1/model.ckpt' · felixslu · closed 11 months ago · 1 comment
#12 · About the quantized model · shiyuetianqiang · closed 3 months ago · 0 comments
#11 · Load cifar_w8a8_ckpt.pth · foreverlove944 · closed 12 months ago · 4 comments
#10 · Code for model calibration · Cheeun · closed 9 months ago · 5 comments
#9 · Model size · ZhibinPeng · closed 1 year ago · 1 comment
#8 · Loading quantized model checkpoint error · arthursunbao · closed 1 year ago · 1 comment
#7 · Extremely high VRAM usage · easonoob · closed 3 months ago · 2 comments
#6 · Images are broken in README.md · 6174 · closed 1 year ago · 1 comment
#5 · Calibration process in the 'resume_cali_model' function · wangjialinEcopia · closed 11 months ago · 4 comments
#4 · What is the main comparison of the paper? · sravanthOppo27 · closed 1 year ago · 1 comment
#3 · Is the weight format of w4a8 fp32? · gongqiang · closed 1 year ago · 1 comment
#2 · Activation quantization of Stable Diffusion · jianyuheng · opened 1 year ago · 0 comments
#1 · Inference speedup mechanism or model size compression? · sravanthOppo27 · closed 3 months ago · 7 comments