-
### Model/Pipeline/Scheduler description
Conditional Diffusion Distillation (CoDi) is a new diffusion generation method recently proposed by Google Research and Johns Hopkins University. Accepted b…
-
In the paper, the T2V or I2V LCM model is distilled from a T2I LCM model, but I think there is no training code or model released.
Could you check?
-
In the LCM-LoRA Hugging Face demo, lcm-lora-sdxl is used with stable-diffusion-xl-1.0-inpainting-0.1.
However, lcm-lora-sdxl was trained for SDXL rather than SDXL-inpainting.
How does it work?…
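One plausible explanation (an assumption on my part, not something the demo states): the SDXL-inpainting UNet shares the names and shapes of nearly all its weight matrices with base SDXL, and a LoRA is just a low-rank additive delta on those matrices, so it can be merged into either checkpoint as long as the layer names and shapes match. The merge rule itself is tiny; here it is in plain Python with made-up shapes:

```python
# Minimal sketch of merging a LoRA delta into a frozen weight matrix:
#   W' = W + scale * (lora_up @ lora_down)
# Shapes and values here are illustrative toys; real SDXL layers are far
# larger, but the rule is the same, which is why only matching layer
# names/shapes are needed, not an identical base checkpoint.

def matmul(a, b):
    """Plain-Python matrix multiply for small nested lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def merge_lora(weight, lora_down, lora_up, scale=1.0):
    """Return weight + scale * (lora_up @ lora_down)."""
    delta = matmul(lora_up, lora_down)
    return [[w + scale * d for w, d in zip(wrow, drow)]
            for wrow, drow in zip(weight, delta)]

# 2x2 frozen weight, rank-1 LoRA factors (up: 2x1, down: 1x2).
weight = [[1.0, 0.0], [0.0, 1.0]]
lora_down = [[1.0, 2.0]]    # rank x in_features
lora_up = [[0.5], [0.25]]   # out_features x rank

print(merge_lora(weight, lora_down, lora_up))
```

Layers that exist only in one model (e.g. the inpainting UNet's extra mask-channel input conv) simply receive no LoRA delta, so loading still succeeds.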
-
Since LCM models and other checkpoint/LoRA/sampler combos use low, finely tuned CFG settings, it would be nice to have a resolution of 0.1 on the slider instead of whole integers.
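The requested behavior can be sketched independently of any UI framework. A hypothetical helper (names and bounds are mine, not from the request) that clamps a raw slider value and snaps it to 0.1 steps:

```python
def snap_cfg(value, step=0.1, lo=1.0, hi=15.0):
    """Clamp a CFG value to [lo, hi] and snap it to the nearest `step`.

    step=0.1 gives the fine-grained resolution useful for LCM-style
    models, whose sweet spot often lies between 1.0 and 2.0.
    (The lo/hi bounds here are illustrative assumptions.)
    """
    value = min(max(value, lo), hi)
    # Round in integer "ticks" to avoid float drift like 1.3000000000000003.
    ticks = round(value / step)
    return round(ticks * step, 10)

print(snap_cfg(1.234))  # snapped to the nearest 0.1 step
print(snap_cfg(0.5))    # clamped up to the lower bound
```

In Gradio specifically, the equivalent is just passing `step=0.1` to the `gr.Slider` constructor.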
-
For example: 2000 × 3000
-
Hi.
I use **realisticStockPhoto_v20** on Fooocus with **sdxl_film_photography_style** lora and I really like the results.
Fooocus and other gradio implementations come with settings inputs that I …
-
Thank you for your great work!
I'm curious whether we can make InstantID faster. It's about 3x slower than the standard SDXL pipeline on my 4090 (2.73 it/s vs 8 it/s). What slows it down so much?
-
Hi guys! I tested the TCD LoRA but found that Euler's method combined with TCD achieves better results than the TCD sampler. I don't know why; is this normal?
-
I just want to use the default SD 1.5 (or with some DreamBooth safetensors), but the image I generate looks like this:
![7701717833622_ pic](https://github.com/kousw/experimental-consistory/assets/10…
-
We provide support for running validation inference during and after training in our officially maintained training examples. This is very helpful for keeping track of training progress.
We could …
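The periodic-validation pattern described above can be sketched without any ML framework; every name here is a hypothetical stand-in for real training code:

```python
def train_with_validation(num_steps, validation_steps, train_step, validate):
    """Run `train_step` for num_steps iterations, calling `validate`
    every `validation_steps` steps and once more after the final step
    (if it did not already land on a validation step).

    `train_step` and `validate` are placeholders for a real optimizer
    step and a real inference pass (e.g. generating fixed prompts and
    logging the images to a tracker).
    """
    logs = []
    for step in range(1, num_steps + 1):
        train_step(step)
        if step % validation_steps == 0:
            logs.append(validate(step))
    if num_steps % validation_steps != 0:
        logs.append(validate(num_steps))  # post-training validation
    return logs

# Validates at steps 4, 8, and finally at 10.
print(train_with_validation(10, 4, lambda s: None, lambda s: s))
```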