Xiuyu-Li / q-diffusion

[ICCV 2023] Q-Diffusion: Quantizing Diffusion Models.
https://xiuyuli.com/qdiffusion/
MIT License

Inference speed mechanism or model size compression? #1

Closed: sravanthOppo27 closed this issue 4 months ago

Xiuyu-Li commented 1 year ago

Hi, at the moment, only simulated quantization is implemented in this repository for FID evaluation. We will be releasing a significantly updated arXiv paper that includes comprehensive data on model sizes.
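A minimal sketch of what simulated (fake) quantization means in practice, and why it produces FID numbers without any speedup or size reduction: tensors are quantized and immediately dequantized, so they stay in floating point but only take values on a low-bit grid. The function and parameter names below are illustrative assumptions, not the repository's actual API.

```python
import torch

def fake_quantize(x: torch.Tensor, n_bits: int = 8) -> torch.Tensor:
    """Quantize then immediately dequantize (symmetric, per-tensor).

    The result keeps its float dtype, so there is no memory saving or
    integer-kernel speedup; it only simulates low-bit rounding error.
    """
    qmax = 2 ** (n_bits - 1) - 1                    # e.g. 127 for 8 bits
    scale = x.abs().max().clamp(min=1e-8) / qmax    # per-tensor scale
    x_int = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return x_int * scale                            # back to float

w = torch.randn(4, 4)
w_q = fake_quantize(w, n_bits=4)
print((w - w_q).abs().max())  # quantization error; weights remain fp32
```

Realizing actual compression and speedup would additionally require storing integer weights and running integer kernels, which is what an end-to-end deployment path would add on top of this simulation.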

continue-revolution commented 1 year ago

Do you plan to build an end-to-end quantization pipeline for Stable Diffusion? If so, could you share an approximate timeline?

Xiuyu-Li commented 1 year ago

@continue-revolution Hi Chengsong,

End-to-end quantization is on our roadmap, and we're aiming to release it in the coming months. Please stay tuned for updates, and thanks for your interest in the project! We also welcome contributions from the open-source community to help expedite this process.

continue-revolution commented 1 year ago

@Xiuyu-Li I'd be glad to contribute in some way. I've sent an email to your Berkeley address.

Jack47 commented 1 year ago

Can't wait to see "a significantly updated arXiv paper that includes comprehensive data on model sizes". Cheers!

sravanthOppo27 commented 1 year ago

When can we expect the code for calibration?

lucasjinreal commented 11 months ago

Is there an expected date for the speedup from quantization?