lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0
1.68k stars 175 forks

[Feature Request]: Proper LCM support #316

Open patientx opened 8 months ago

patientx commented 8 months ago

Is there an existing issue for this?

What would your feature do ?

There is a plugin for LCM's first model that works here, but they recently added a way to use all SD 1.5 and SDXL models. It would be cool to be able to use them here. I tried SD.Next and there seems to be a problem regarding memory; maybe you can look into this and add proper support, since this project is mainly DirectML, and in all the other SD web UIs DirectML seems to be way behind the NVIDIA stuff.

Proposed workflow

It is probably better to be able to use LoRAs rather than individual models.

https://huggingface.co/collections/latent-consistency/latent-consistency-models-weights-654ce61a95edd6dffccef6a8
https://huggingface.co/collections/latent-consistency/latent-consistency-models-loras-654cdd24e111e16f0865fba6

Additional information

No response

lshqqytiger commented 8 months ago

Try SD.Next. I already implemented the same DirectML support there, and it supports the LCM scheduler too.

patientx commented 8 months ago

Can't use it without out-of-memory errors with medvram etc. in SD.Next. Only HyperTile works, even at 512x512, but if I use it the default LCM sampler gives garbled output. There are definitely some memory problems there. LCM on ComfyUI also has the same OOM errors, but at least HyperTile works without problems there. Normally I can generate up to 960x768 on normal SD 1.5 models without any errors.

patientx commented 6 months ago

Can we get LCM now that you have updated the app itself?