invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: model selector drop down not changing model #1674

Closed: charliesdad closed this issue 1 year ago

charliesdad commented 1 year ago

Is there an existing issue for this?

OS

Windows

GPU

cuda

VRAM

12GB

What happened?

Selecting a different model from the drop-down list initiates 'model loading' and a running light, but when the status changes to 'model changed', the model has not actually changed. It remains the default stable-diffusion-1.5.

Screenshots

[Screenshot: chrome_MulYJKVsBn]

Additional context

Fresh install, new user (I have been using Automatic1111). Created 3 generations (invokes!?).

Contact Details

physis123 commented 1 year ago

What does the terminal say? I have the same issue and it's always the same error:


** model PFG_SAM2 could not be loaded: 'state_dict'
Traceback (most recent call last):
  File "c:\stable-diffusion\invokeai\ldm\invoke\model_cache.py", line 80, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "c:\stable-diffusion\invokeai\ldm\invoke\model_cache.py", line 230, in _load_model
    sd = sd['state_dict']
KeyError: 'state_dict'

** restoring stable-diffusion-1.5

I've tried using !import model from the command line, and editing models.yaml directly.

stormwulfren commented 1 year ago

I'm getting the same issue. Seems to apply to models that have been merged using Automatic1111.

TestModel = HentaiDiffusion 17 + Stable Diffusion 1.5, 0.05, ckpt

Each component model works independently, but after merging, the result won't load in InvokeAI, though it will in Automatic1111.

>> Offloading stable-diffusion-1.5 to CPU
>> Scanning Model: testmodel
>> Model Scanned. OK!!
>> Loading testmodel from D:\Standalone\stable-diffusion-webui\models\Stable-diffusion\gentest.ckpt
** model testmodel could not be loaded: 'state_dict'
Traceback (most recent call last):
  File "d:\standalone\invokeai\ldm\invoke\model_cache.py", line 80, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "d:\standalone\invokeai\ldm\invoke\model_cache.py", line 230, in _load_model
    sd = sd['state_dict']
KeyError: 'state_dict'

** restoring stable-diffusion-1.5

Interestingly, an old model I merged myself back on 21/10/2022, when I was first messing with blended models, does work in InvokeAI.
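
For context on what such a merge produces: an Automatic1111 "weighted sum" merge is conceptually just a per-key blend of the two state dicts. A minimal sketch, assuming plain PyTorch .ckpt files; the paths and which model gets the 0.05 weight are assumptions for illustration:

import torch

# Load both checkpoints; paths are placeholders.
a = torch.load("HentaiDiffusion17.ckpt", map_location="cpu")
b = torch.load("sd-v1-5.ckpt", map_location="cpu")

# CompVis-style checkpoints keep the weights under 'state_dict';
# fall back to the top level when that wrapper is missing.
sd_a = a.get("state_dict", a)
sd_b = b.get("state_dict", b)

alpha = 0.05
merged = {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a if k in sd_b}

# Saving the bare dict (with no {'state_dict': ...} wrapper) produces
# exactly the kind of file that raises KeyError: 'state_dict' in this issue.
torch.save(merged, "testmodel.ckpt")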

saftle commented 1 year ago

I can confirm the same issue. After merging models in Automatic1111, they do not work in InvokeAI.

stormwulfren commented 1 year ago

I was able to fix this with a 2-liner. Whether it's best practice or will have downstream issues, I have no idea. But it seems like Invoke expects the model weights to exist at an index named 'state_dict', and I kinda feel like that's not really the way it's supposed to be, because models that do have the weights in 'state_dict' also tend to have another nested 'state_dict' under it, with the rest of the params as siblings.

I'm new to the SD dev scene, so take my words with a pinch of salt, but I feel like InvokeAI is dealing with this edge case as if it were the default.

Monkeypatch fix as follows:

ldm/invoke/model_cache.py: line 229

        del weight_bytes
        if 'state_dict' in sd:
            sd = sd['state_dict']
        model = instantiate_from_config(omega_config.model)

Basically, it checks whether 'state_dict' exists in the dictionary; if it does, the weights are loaded from there; otherwise the top-level dict is used as-is.
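
If you want to sanity-check what a given checkpoint actually contains before pointing InvokeAI at it, a quick sketch (the path is a placeholder):

import torch

# Peek at a checkpoint's top-level keys to see whether the weights
# live under 'state_dict' or directly at the top level.
ckpt = torch.load("testmodel.ckpt", map_location="cpu")
print(list(ckpt.keys())[:10])

# Works for both layouts: unwrap if wrapped, otherwise use as-is.
# This is the same logic as the fix above.
sd = ckpt.get("state_dict", ckpt)
print(f"{len(sd)} entries, e.g. {next(iter(sd))}")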

saftle commented 1 year ago

Thank you so much @reznyt, that fixed it. Would you perhaps mind doing a pull request? Might help to get it looked at faster.

charliesdad commented 1 year ago

> I was able to fix this with a 2-liner. Whether it's best practice or will have downstream issues, I have no idea. But it seems like Invoke expects the model weights to exist at an index named 'state_dict', and I kinda feel like that's not really the way it's supposed to be, because models that do have the weights in 'state_dict' also tend to have another nested 'state_dict' under it, with the rest of the params as siblings.
>
> I'm new to the SD dev scene, so take my words with a pinch of salt, but I feel like InvokeAI is dealing with this edge case as if it were the default.
>
> Monkeypatch fix as follows:
>
> ldm/invoke/model_cache.py: line 229
>
>         del weight_bytes
>         if 'state_dict' in sd:
>             sd = sd['state_dict']
>         model = instantiate_from_config(omega_config.model)
>
> Basically, it checks whether 'state_dict' exists in the dictionary; if it does, the weights are loaded from there; otherwise the top-level dict is used as-is.

@reznyt That's great, thanks.

Now, how do I monkeypatch?

I've looked into it and I'm not quite sure.

Thanks again

joshistoast commented 1 year ago

The fix by @reznyt already seems to be in my model_cache.py file, but alas, I'm still getting the error when switching to the model.

joshistoast commented 1 year ago

My error is slightly different:

** model furrystaberv2.2 could not be loaded: 'state_dict'
Traceback (most recent call last):
  File "C:\Users\artis\Documents\invokeai\.venv\lib\site-packages\ldm\invoke\model_cache.py", line 80, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "C:\Users\artis\Documents\invokeai\.venv\lib\site-packages\ldm\invoke\model_cache.py", line 249, in _load_model
    vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items() if k[0:4] != "loss"}
KeyError: 'state_dict'
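
Note that this traceback comes from the VAE load around line 249, not the model load at line 230 that the earlier fix patches, so the same guard would presumably need to be mirrored there. An untested sketch of the analogous change, assuming vae_ckpt is the dict returned by torch.load:

        # Untested sketch: mirror the 'state_dict' guard on the VAE path
        # (model_cache.py around line 249).
        vae_sd = vae_ckpt.get("state_dict", vae_ckpt)
        vae_dict = {k: v for k, v in vae_sd.items() if k[0:4] != "loss"}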

visualinventor commented 1 year ago

I came to report the same issue with a Hugging Face .ckpt from here. It looks like everything is set up correctly, given it was made on the SD 1.5 model, but the same error happens as above.

** model inkpunk-v2 could not be loaded: 'state_dict'
Traceback (most recent call last):
  File "/Users/tim.watson/Dev/StableDiffusion/invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/model_cache.py", line 80, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "/Users/tim.watson/Dev/StableDiffusion/invokeai/.venv/lib/python3.10/site-packages/ldm/invoke/model_cache.py", line 249, in _load_model
    vae_dict = {k: v for k, v in vae_ckpt["state_dict"].items() if k[0:4] != "loss"}
KeyError: 'state_dict'

From models.yaml:

inkpunk-v2:
  description: Inkpunk Diffusion 2 (2.13 GB)
  weights: models/ldm/stable-diffusion-v1/inkpunkDiffusion_v2.ckpt
  config: configs/stable-diffusion/v1-inference.yaml
  width: 768
  height: 768
  vae: models/ldm/stable-diffusion-v1/diffusion_pytorch_model.bin

UPDATE: Just adding the new inkpunk.ckpt into the 1.5 model slot does work. Guessing I had the wrong VAE maybe?

stable-diffusion-1.5:
  description: The newest Stable Diffusion version 1.5 weight file (4.27 GB)
  weights: models/ldm/stable-diffusion-v1/inkpunkDiffusion_v2.ckpt
  config: configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  default: true
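
A plausible explanation for the "wrong VAE" guess above: diffusion_pytorch_model.bin is a diffusers-format VAE, which is a bare state dict, while vae-ft-mse-840000-ema-pruned.ckpt is a CompVis-style checkpoint whose weights sit under a 'state_dict' wrapper, which is the key the loader at line 249 expects. A quick way to check which kind a file is (paths as placeholders):

import torch

# Report whether a VAE file is a bare state dict (typical of diffusers
# .bin files) or a wrapped CompVis-style checkpoint.
for path in ("models/ldm/stable-diffusion-v1/diffusion_pytorch_model.bin",
             "models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt"):
    d = torch.load(path, map_location="cpu")
    kind = "wrapped ('state_dict' present)" if "state_dict" in d else "bare state dict"
    print(f"{path}: {kind}")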

Lycantant commented 1 year ago

> I can confirm the same issue. After merging models in Automatic1111, they do not work in InvokeAI.

So this is the reason why some of the models I have (ElyOrangeMix & Waifu Diffusion 1.4) don't work. Thanks 🙂