Closed sammcj closed 9 months ago
Only LCM-LoRAs are supported through the model manager at the moment
Ohhhhh I see, I'm sorry I didn't realise that!
So in theory this should work? https://huggingface.co/latent-consistency/lcm-lora-sdxl
I'll close this issue for now.
That should work - sorry, it's a little confusing! I've clarified the release notes
np, love your work! :)
If you want to try latent-consistency/lcm-sdxl (which is basically a UNet, not a complete SD model), you can use my cut-and-paste job Vargol/lcm_sdxl_full_model, which adds all the other bits required and loads via the Model Manager, or at least it did in the last release candidate.
I haven't compared results properly yet, but I find it a bit faster per step than the LCM-LoRA. Obviously, though, the LoRA can be used with any compatible model, whereas lcm_sdxl_full_model is a one-off.
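For anyone trying these models outside InvokeAI, here is a minimal diffusers sketch of the two options discussed above: applying the LCM-LoRA to a stock SDXL checkpoint versus swapping in the full LCM UNet. The model IDs are the ones from this thread; the base checkpoint (stabilityai/stable-diffusion-xl-base-1.0) and all parameter choices are assumptions, and this says nothing about how InvokeAI's model manager loads them internally. It also needs a CUDA-capable GPU and sizeable downloads to actually run.

```python
# Hedged sketch (not InvokeAI internals): two ways to run SDXL with LCM
# using Hugging Face diffusers. Assumes a GPU and the standard diffusers API.
import torch
from diffusers import DiffusionPipeline, LCMScheduler, UNet2DConditionModel

BASE = "stabilityai/stable-diffusion-xl-base-1.0"  # assumed base checkpoint

# Option 1: LCM-LoRA layered on top of any compatible SDXL model.
pipe = DiffusionPipeline.from_pretrained(BASE, torch_dtype=torch.float16)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
pipe.to("cuda")

# Option 2: the full LCM UNet (latent-consistency/lcm-sdxl) replacing the
# base UNet — this is the "one-off" variant mentioned above.
unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    BASE, unet=unet, torch_dtype=torch.float16
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# LCM models sample in very few steps with low (or no) guidance.
image = pipe(
    "a photo of a cat", num_inference_steps=4, guidance_scale=1.0
).images[0]
```

Either way the key detail is swapping the scheduler for LCMScheduler; with the default SDXL scheduler the few-step LCM sampling won't behave correctly.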
Thanks @Vargol I'll give it a shot 😄
First of all - love your work InvokeAI folks! ❤️
Is there an existing issue for this?
OS
Linux
GPU
cuda
VRAM
24GB
What version did you experience this issue on?
3.4.0
What happened?
When downloading an LCM model via the model manager, or when trying to import it from a file, InvokeAI fails to load (or save) the model.
For example, using the popular lcm-sdxl model:
latent-consistency/lcm-sdxl
I did read the release notes and saw that LCMs use a node connector (I'm not quite sure from the wording what the impact is to the user), but also that they should be installable via the model manager.
Contact Details
No response