-
### Model/Pipeline/Scheduler description
Conditional Diffusion Distillation (CoDi) is a new diffusion generation method recently proposed by Google Research and Johns Hopkins University. Accepted b…
-
### Model/Pipeline/Scheduler description
https://github.com/Zeqiang-Lai/OpenDMD
### Open source status
- [X] The model implementation is available.
- [X] The model weights are available (Only…
-
I checked `requirements.txt` under `diffusion_distillation`; it lists:
numpy
matplotlib
tensorflow
tensorflow_datasets
jax
jaxlib
pillow
flax
ml_collections
clu
But what are the versions of t…
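Since the `requirements.txt` above doesn't pin versions, one way to record the combination that works in your environment is to query pip's metadata for each listed distribution (a small sketch; it assumes the packages were installed with pip and simply prints `None` for anything missing):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist_name):
    """Return the installed version of a pip distribution, or None if absent."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return None

# The distributions listed in requirements.txt:
for name in ["numpy", "matplotlib", "tensorflow", "tensorflow_datasets",
             "jax", "jaxlib", "pillow", "flax", "ml_collections", "clu"]:
    print(f"{name}=={installed_version(name)}")
```

Pasting the resulting `name==version` lines back into a pinned requirements file reproduces the environment later.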
-
### Model/Pipeline/Scheduler description
ConsistencyTTA, introduced in the paper [_Accelerating Diffusion-Based Text-to-Audio Generation
with Consistency Distillation_](https://arxiv.org/abs/2309.…
-
### Model/Dataset/Scheduler description
Classifier-free guided diffusion models have recently been shown to be highly effective at high-resolution image generation, and they have been widely used in …
-
When you train LCM_svd, you set svd_solver like this:
svd_solver = SVDSolver(args.N, noise_scheduler.config.sigma_min, noise_scheduler.config.sigma_max, 7, 0.7, 1.6)
Why do you change the training timestep t…
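I don't know `SVDSolver`'s actual signature, but the first four positional arguments read like `(N, sigma_min, sigma_max, rho)`, with `rho=7` matching the Karras et al. (2022) noise schedule; as an assumption about what such a solver builds internally, that schedule can be sketched as:

```python
def karras_sigmas(n, sigma_min, sigma_max, rho=7.0):
    """Karras et al. (2022) schedule: interpolate linearly in sigma^(1/rho)."""
    lo, hi = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return [(hi + i / (n - 1) * (lo - hi)) ** rho for i in range(n)]

# Monotonically decreasing noise levels from sigma_max down to sigma_min:
sigmas = karras_sigmas(5, 0.002, 80.0)
print([round(s, 4) for s in sigmas])
```

What the remaining arguments (0.7, 1.6) mean is exactly the kind of thing the question above is asking; they are not covered by this sketch.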
-
Hi, thank you for your paper demystifying the "Variational Diffusion Models". In the original paper [Variational Diffusion Models version 1](https://arxiv.org/pdf/2107.00630v1) and 2, there is an equa…
-
Hi, thank you for the great work!
I was wondering if there is any reference or technical report that I could look into regarding the technique for distilling the Schnell and the Dev models from the…
-
https://arxiv.org/pdf/2310.01407.pdf
TBD