0xbitches / sd-webui-lcm

Latent Consistency Model for AUTOMATIC1111 Stable Diffusion WebUI

[Diffusers update] Switch to official way of loading LCM #26

patrickvonplaten opened this issue 8 months ago

patrickvonplaten commented 8 months ago

Hey @0xbitches,

Just a heads-up: we just released diffusers 0.22, which means LCM is now supported as a native pipeline. We strongly recommend using the following code snippet:

from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")

# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# Can be set to 1~50 steps. LCM supports fast inference even at <= 4 steps. Recommended: 1~8 steps.
num_inference_steps = 4

images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
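
Since output_type="pil" makes the pipeline return standard PIL images, continuing from the snippet above, the first result can be inspected or saved with:

# `images` comes from the pipeline call above; each entry is a PIL.Image.Image.
images[0].save("lcm_sample.png")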

If you still want to use the community pipelines, please make sure to use the following: https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7#usage-deprecated
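
For context, the deprecated community-pipeline route at that link loads the same checkpoint through custom_pipeline. A rough sketch based on that model card (argument names may vary across diffusers versions):

from diffusers import DiffusionPipeline
import torch

# Deprecated path: load the checkpoint through the community pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    custom_pipeline="latent_consistency_txt2img",
    custom_revision="main",
)
pipe.to(torch_device="cuda", torch_dtype=torch.float32)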

danila-pryamonosov-hypemasters commented 8 months ago

Well, the new approach worked, but generations became a lot slower :(

patrickvonplaten commented 8 months ago

Hmm, they shouldn't be - can you open an issue on diffusers? Also, are you sure you're running both in the same precision (torch.float16 vs. torch.float32)?
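
For a fair speed comparison, both pipelines should be loaded at the same precision. An illustrative timing harness (a sketch, assuming a CUDA GPU; the community-pipeline arguments follow the deprecated model-card usage linked above):

import time
import torch
from diffusers import DiffusionPipeline

def timed_generate(pipe, prompt, steps=4):
    # Warm-up run so one-time setup cost doesn't skew the measurement.
    pipe(prompt=prompt, num_inference_steps=steps)
    torch.cuda.synchronize()
    start = time.perf_counter()
    pipe(prompt=prompt, num_inference_steps=steps)
    torch.cuda.synchronize()
    return time.perf_counter() - start

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# Native pipeline (diffusers >= 0.22), loaded in float16.
pipe_new = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
pipe_new.to(torch_device="cuda", torch_dtype=torch.float16)
print("native pipeline:", timed_generate(pipe_new, prompt), "s")

# Community pipeline (deprecated), also in float16 so precision matches.
pipe_old = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    custom_pipeline="latent_consistency_txt2img",
    custom_revision="main",
)
pipe_old.to(torch_device="cuda", torch_dtype=torch.float16)
print("community pipeline:", timed_generate(pipe_old, prompt), "s")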