Open · MKFMIKU opened this issue 3 months ago
Model/Pipeline/Scheduler description
Conditional Diffusion Distillation (CoDi) is a diffusion generation method recently proposed by Google Research and Johns Hopkins University, and accepted at CVPR 2024. Built on consistency models, CoDi offers a significant advance in accelerating latent diffusion models, enabling generation in just 1-4 steps.
Key Features:
stablediffusionapi/juggernaut-reborn
can be accelerated to generate results in 4 steps without the need to distill the juggernaut-reborn model itself. The differences between Conditional Diffusion Distillation and the recent LCM-LoRA are listed below.
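To illustrate the kind of consistency-style multistep sampling that CoDi builds on, here is a minimal toy sketch in NumPy. Everything here is a hypothetical stand-in (the `distilled_denoiser`, the `TARGET` latent, and the noise schedule are invented for illustration); it is not CoDi's actual implementation, only the general "denoise, re-noise at a lower level, denoise again" loop that lets a distilled model sample in a handful of steps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean" latent the toy model knows about; a real
# distilled model would be a neural network trained on image latents.
TARGET = np.full(4, 0.5)

def distilled_denoiser(x_noisy, sigma):
    # Toy stand-in for a consistency-distilled model: given a noisy
    # sample and its noise level, predict the clean sample directly,
    # trusting the prediction more at higher noise levels.
    w = sigma / (sigma + 1.0)
    return w * TARGET + (1.0 - w) * x_noisy

def multistep_sample(sigmas, dim=4):
    # Consistency-style multistep sampling: start from pure noise,
    # then alternately re-noise at a decreasing noise level and
    # denoise again. Each extra step refines the estimate.
    x = rng.normal(size=dim) * sigmas[0]
    x = distilled_denoiser(x, sigmas[0])
    for sigma in sigmas[1:]:
        x_noisy = x + rng.normal(size=dim) * sigma
        x = distilled_denoiser(x_noisy, sigma)
    return x

# 4-step sampling, matching the 1-4 step regime described above.
sample = multistep_sample([10.0, 3.0, 1.0, 0.3])
```

With only 4 denoiser calls the toy sample lands near the target latent, which is the same budget CoDi claims for real latent diffusion models.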
Open source status
Provide useful links for the implementation
Project page: https://fast-codi.github.io
Paper: https://arxiv.org/abs/2310.01407
@MKFMIKU will submit a PR providing the training code in PyTorch and a rough pretrained model.