invoke-ai / InvokeAI

InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, supports terminal use through a CLI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: LCM models fail to load - 'UNet2DConditionModel is not one of the supported classes' #5116

Closed: sammcj closed this issue 9 months ago

sammcj commented 9 months ago

First of all - love your work InvokeAI folks! ❤️

Is there an existing issue for this?

OS

Linux

GPU

cuda

VRAM

24GB

What version did you experience this issue on?

3.4.0

What happened?

When downloading an LCM model via the model manager, or when trying to import it from a file, InvokeAI fails to load (or save) the model.

For example, using the popular lcm-sdxl model:

  1. Model manager
  2. Paste in latent-consistency/lcm-sdxl
  3. Wait for the download
  4. Return to the machine to check the status. The UI shows no error messages or warnings, but the model is not installed; I suspect an error pops up and then disappears on its own without leaving a hint or warning.
  5. Check the logs:
config.json: 100% 1.83k/1.83k [00:00<00:00, 31.5MiB/s]
diffusion_pytorch_model.safetensors: 100% 10.3G/10.3G [01:40<00:00, 102MiB/s]
[2023-11-17 12:37:29,715]::[InvokeAI]::ERROR --> Unable to determine model type for /invokeai/models/tmp2_u67ytk/lcm-sdxl; class UNet2DConditionModel is not one of the supported classes [StableDiffusionPipeline, StableDiffusionInpaintPipeline, StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, StableDiffusionXLInpaintPipeline, AutoencoderKL, AutoencoderTiny, ControlNetModel, CLIPVisionModelWithProjection, T2IAdapter]
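For reference, the check that fails here can be sketched roughly as follows. This is a hypothetical illustration, not InvokeAI's actual probe code: a full diffusers pipeline repo ships a model_index.json naming a pipeline class, while latent-consistency/lcm-sdxl only publishes a bare UNet, so its top-level config reports UNet2DConditionModel, which is not in the supported list above.

```python
import json
from huggingface_hub import hf_hub_download

# Hypothetical sketch of the kind of check involved (not InvokeAI's real code):
# full pipeline repos carry a model_index.json whose "_class_name" is a pipeline
# such as StableDiffusionXLPipeline; bare component repos only have a config.json
# whose "_class_name" is the component itself.
def top_level_class(repo_id: str) -> str:
    try:
        path = hf_hub_download(repo_id, "model_index.json")  # present for full pipelines
    except Exception:
        path = hf_hub_download(repo_id, "config.json")       # bare component repos
    with open(path) as f:
        return json.load(f)["_class_name"]

print(top_level_class("latent-consistency/lcm-sdxl"))  # -> "UNet2DConditionModel"
```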

I did read the release notes and saw that LCM uses a node connector (I'm not quite sure from the wording what the impact is for the user), but also that LCM models should be installable via the model manager.

Screenshots

(screenshot)

Additional context

Contact Details

No response

Millu commented 9 months ago

Only LCM-LoRAs are supported through the model manager at the moment.
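For context, an LCM-LoRA is applied on top of an existing checkpoint rather than installed as a standalone model. In plain diffusers (outside InvokeAI, with model names taken from the Hugging Face examples) that looks roughly like this:

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

# Rough sketch of LCM-LoRA usage in plain diffusers (assumes an SDXL base model):
# the LoRA and the LCM scheduler are layered onto an existing checkpoint.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

# Few-step sampling with low guidance, per the LCM-LoRA examples
image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
```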

sammcj commented 9 months ago

Ohhhhh I see, I'm sorry I didn't realise that!

sammcj commented 9 months ago

So in theory this should work? https://huggingface.co/latent-consistency/lcm-lora-sdxl

(screenshot)

sammcj commented 9 months ago

I'll close this issue for now.

Millu commented 9 months ago

That should work - sorry, it's a little confusing! I've clarified the release notes

sammcj commented 9 months ago

np, love your work! :)

Vargol commented 9 months ago

If you want to try latent-consistency/lcm-sdxl, which is basically a UNet model rather than a complete SD model, you can use my cut-and-paste job Vargol/lcm_sdxl_full_model, which adds all the other bits required and loads via the Model Manager (or at least it did in the last release candidate).
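For anyone following along, here's a rough sketch of how the bare UNet is used in plain diffusers (outside InvokeAI), following the Hugging Face model card: the lcm-sdxl repo only supplies the UNet, so an SDXL base pipeline has to provide the VAE, text encoders, tokenizers and scheduler.

```python
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler

# Sketch based on the lcm-sdxl model card: slot the LCM UNet into an SDXL pipeline.
unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a close-up photo of a fox", num_inference_steps=4, guidance_scale=8.0
).images[0]
```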

I haven't compared results properly yet, but I find it a bit faster per step than the LCM LoRA. Obviously, though, the LoRA can be used with any compatible model, whereas lcm_sdxl_full_model is a one-off.

sammcj commented 9 months ago

Thanks @Vargol I'll give it a shot 😄