huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

Add Conditional Diffusion Distillation #8309

Open MKFMIKU opened 3 months ago

MKFMIKU commented 3 months ago

Model/Pipeline/Scheduler description

Conditional Diffusion Distillation (CoDi) is a diffusion distillation method recently proposed by Google Research and Johns Hopkins University, and accepted at CVPR 2024. Built on consistency models, CoDi offers a significant advance in accelerating latent diffusion models, enabling generation in just 1-4 steps.
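Since CoDi builds on consistency models, the core training idea can be illustrated with a generic consistency-distillation objective: the student is trained so that its predictions at two adjacent noise levels agree, with a frozen (e.g. EMA) copy providing the target. This is a minimal sketch in PyTorch, not the authors' code; the function and variable names are hypothetical.

```python
# Hypothetical sketch of a consistency-distillation training step (not the
# CoDi implementation): enforce that the student's clean-sample prediction
# at noise level t matches a frozen target network's prediction at the
# adjacent, lighter noise level t-1, for the same underlying sample.
import torch

def consistency_loss(student, target_net, x0, sigmas, t_idx):
    """Self-consistency objective across two adjacent noise levels."""
    noise = torch.randn_like(x0)
    x_t = x0 + sigmas[t_idx] * noise       # heavier-noised sample
    x_s = x0 + sigmas[t_idx - 1] * noise   # adjacent, lighter-noised sample
    pred_t = student(x_t, sigmas[t_idx])   # student prediction at level t
    with torch.no_grad():
        target_s = target_net(x_s, sigmas[t_idx - 1])  # frozen target
    return torch.mean((pred_t - target_s) ** 2)

# Toy usage with a trivial "denoiser" that ignores the noise level.
net = torch.nn.Linear(8, 8)
f = lambda x, sigma: net(x)
loss = consistency_loss(f, f, torch.randn(4, 8),
                        torch.linspace(0.1, 1.0, 10), t_idx=5)
loss.backward()
```

In the real method the target network is typically an exponential moving average of the student, and the noise schedule and conditioning (e.g. the ControlNet input) follow the paper.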

Key Features:

The differences between Conditional Diffusion Distillation (CoDi) and the recent LCM-LoRA are listed below:

| | Conditional Diffusion Distillation (CoDi) | LCM-LoRA |
| --- | --- | --- |
| Scheduler | Anything (Euler is tested) | LCM |
| Adapter | ControlNet | LoRA |
| Full training | Available | None |
| Backbone | SD 1.5 (including variants like juggernaut-reborn) | SD and SDXL |

Open source status

Provide useful links for the implementation

project page: https://fast-codi.github.io
paper: https://arxiv.org/abs/2310.01407

@MKFMIKU will submit a PR providing the training code in PyTorch and a rough pretrained model.

github-actions[bot] commented 4 days ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.