Can we get LCM sampler support for inference and the other modes? https://github.com/luosiallen/latent-consistency-model https://latent-consistency-models.github.io/ I just tested the base LCM 1.5 LoRA and it's really fast even without the actual LCM sampler: roughly 10 images in 0.6 s on SD 1.5, and with XL about 10 images in 1.3 s. It only needs 2-8 steps and CFG 1-2 with any sampler, though the exact speed depends on the GPU, really.
It works fine now; just install the LCM-LoRA as you would any other LoRA. Well, "fine" is a relative term: I did find a UX bug related to this, but images are generated just fine.