Closed anonymous-person closed 2 weeks ago
Very weird error.
The code in e2fe29c runs after inference, when the image is saved, but this error occurs on model load. The raise NotImplementedError is triggered when the model's scheduler config is wrong or unsupported, but the values printed are all as expected.
Try a different model.
Hmm, with the latest code, it works with public models I've downloaded, like sd15, flux1d, dreamshaper. But I get the error when I try to use any of my DreamBooth custom trained models. 😭
So 90a6970 gives this error:
Loading Model: {'checkpoint_info': {'filename': '/home/anon/stable-diffusion-webui-forge/models/Stable-diffusion/myDreamboothModel2.safetensors', 'hash': 'a5922235'}, 'additional_modules': ['/home/anon/stable-diffusion-webui-forge/models/VAE/vae-ft-mse-840000-ema-pruned-sd15.safetensors'], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'unet': 686, 'vae': 250, 'text_encoder': 197, 'ignore': 0}
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
IntegratedAutoencoderKL Unexpected: ['model_ema.decay', 'model_ema.num_updates']
Traceback (most recent call last):
File "/home/anon/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 30, in work
self.result = self.func(*self.args, **self.kwargs)
File "/home/anon/stable-diffusion-webui-forge/modules/txt2img.py", line 124, in txt2img_function
processed = processing.process_images(p)
File "/home/anon/stable-diffusion-webui-forge/modules/processing.py", line 836, in process_images
manage_model_and_prompt_cache(p)
File "/home/anon/stable-diffusion-webui-forge/modules/processing.py", line 804, in manage_model_and_prompt_cache
p.sd_model, just_reloaded = forge_model_reload()
File "/home/anon/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/anon/stable-diffusion-webui-forge/modules/sd_models.py", line 504, in forge_model_reload
sd_model = forge_loader(state_dict, additional_state_dicts=additional_state_dicts)
File "/home/anon/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/anon/stable-diffusion-webui-forge/backend/loader.py", line 318, in forge_loader
or yaml_config.get('model', {}).get('params', {}).get('denoiser_config', {}).get('params', {}).get('scaling_config').get('target', '')
AttributeError: 'NoneType' object has no attribute 'get'
'NoneType' object has no attribute 'get'
I've seen similar errors before, so I deleted my config.json and restarted webui. But as soon as I use a DreamBooth model, I get the same AttributeError: 'NoneType' object has no attribute 'get' error.
Can anybody else confirm this with a sd15 based DreamBooth model? Is it just me?
@catboxanon Any ideas?
I think I can now replicate the issue by creating a dummy YAML file for a checkpoint. The problem seems to be that the check for prediction_type in the YAML defaults to an empty string; it should default to 'epsilon' instead.
Can you try editing /home/anon/stable-diffusion-webui-forge/backend/loader.py
line 318:
currently:
or yaml_config.get('model', {}).get('params', {}).get('denoiser_config', {}).get('params', {}).get('scaling_config', {}).get('target', '')
change to:
or yaml_config.get('model', {}).get('params', {}).get('denoiser_config', {}).get('params', {}).get('scaling_config', {}).get('target', 'epsilon')
This fixes it with my fake YAML test.
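The failure mode is easy to reproduce in isolation. A minimal sketch (the dict contents here are invented for illustration, not an actual Forge config): when `scaling_config` is missing, `.get('scaling_config')` with no default returns `None`, and the next chained `.get('target', '')` raises the `AttributeError` seen in the traceback. Supplying `{}` as the default keeps the chain safe.

```python
# Illustrative dict shaped like a .yaml config with no scaling_config key
# (not an actual Forge config).
yaml_config = {'model': {'params': {'denoiser_config': {'params': {}}}}}

try:
    # Original line 318: no default on .get('scaling_config'), so a
    # missing key yields None and the next .get() raises.
    (yaml_config.get('model', {}).get('params', {})
     .get('denoiser_config', {}).get('params', {})
     .get('scaling_config').get('target', ''))
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'get'

# Fixed: default to {} so the chain never hits None, and default the
# final .get() to 'epsilon'.
target = (yaml_config.get('model', {}).get('params', {})
          .get('denoiser_config', {}).get('params', {})
          .get('scaling_config', {}).get('target', 'epsilon'))
print(target)  # epsilon
```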
The actual fix is to check whether the .yaml config's prediction type is truthy (previously I was mistakenly only checking whether it was None). If the .yaml sets no prediction type, the estimated config's prediction type (which in most cases will be epsilon) should be used, rather than forcefully falling back to epsilon whenever a .yaml exists.
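That selection logic can be sketched as follows (the function name and parameters are hypothetical, not Forge's actual code): a truthiness check covers both None and the empty string that the chained `.get(..., '')` calls can produce, so the estimated value is used whenever the .yaml value is absent or empty.

```python
def resolve_prediction_type(yaml_prediction_type, estimated_prediction_type):
    """Prefer the .yaml value only when it is truthy; otherwise fall back
    to the value estimated from the checkpoint (usually 'epsilon').

    Hypothetical helper illustrating the merged fix, not Forge's code.
    """
    # `or` treats None and '' the same, unlike an `is None` check.
    return yaml_prediction_type or estimated_prediction_type

print(resolve_prediction_type('', 'epsilon'))              # epsilon
print(resolve_prediction_type(None, 'epsilon'))            # epsilon
print(resolve_prediction_type('v_prediction', 'epsilon'))  # v_prediction
```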
These (now merged) PRs should fix it. https://github.com/lllyasviel/stable-diffusion-webui-forge/pull/2272 https://github.com/lllyasviel/stable-diffusion-webui-forge/pull/2273
@catboxanon Unfortunately, #2272 #2273 did not fix the problem. I've attached a copy of my model's yaml. myDreamBoothModel.yaml.txt
@DenOfEquity However, your suggested loader.py hack solves my problem. Can you please commit this fix? Thanks!
@anonymous-person I just tested with that .yaml config, and with both of my PRs applied (which are currently merged to main), the returned result is epsilon (identical to the fix DenOfEquity made). Are you sure you're on the latest commit?
@catboxanon I pulled main again, and this time it works without the manual hack. Thanks.
I pulled the latest build today (e2fe29c104a3893372898281a4c355ef30fb00f0) and now I get this error when I try to run txt2img.
Startup looks like this:
When I click Generate, I get this:
I tried choosing different Sampling Method and Schedule Type in the UI, but it didn't help. I also tried clicking Apply Settings and restarted webui, but it didn't help either.
If I roll back to e5b34baae646077bd2a70155f0a92f42bb763a29, the problem goes away.