changing setting sd_model_checkpoint to smallSDdistilled.ckpt [049baa16ad]: RuntimeError
Traceback (most recent call last):
  File "/home/user/stable-diffusion-webui/modules/shared.py", line 633, in set
    self.data_labels[key].onchange()
  File "/home/user/stable-diffusion-webui/modules/call_queue.py", line 14, in f
    res = func(*args, **kwargs)
  File "/home/user/stable-diffusion-webui/webui.py", line 238, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()), call=False)
  File "/home/user/stable-diffusion-webui/modules/sd_models.py", line 578, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "/home/user/stable-diffusion-webui/modules/sd_models.py", line 510, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/home/user/stable-diffusion-webui/modules/sd_models.py", line 299, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/home/user/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
    size mismatch for model.diffusion_model.output_blocks.4.0.in_layers.0.weight: copying a param with shape torch.Size([1920]) from checkpoint, the shape in current model is torch.Size([2560]).
    size mismatch for model.diffusion_model.output_blocks.4.0.in_layers.0.bias: copying a param with shape torch.Size([1920]) from checkpoint, the shape in current model is torch.Size([2560]).
    size mismatch for model.diffusion_model.output_blocks.4.0.in_layers.2.weight: copying a param with shape torch.Size([1280, 1920, 3, 3]) from checkpoint, the shape in current model is torch.Size([1280, 2560, 3, 3]).
    size mismatch for model.diffusion_model.output_blocks.4.0.skip_connection.weight: copying a param with shape torch.Size([1280, 1920, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 2560, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.7.0.in_layers.0.weight: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1280]).
    size mismatch for model.diffusion_model.output_blocks.7.0.in_layers.0.bias: copying a param with shape torch.Size([960]) from checkpoint, the shape in current model is torch.Size([1280]).
    size mismatch for model.diffusion_model.output_blocks.7.0.in_layers.2.weight: copying a param with shape torch.Size([640, 960, 3, 3]) from checkpoint, the shape in current model is torch.Size([640, 1280, 3, 3]).
    size mismatch for model.diffusion_model.output_blocks.7.0.skip_connection.weight: copying a param with shape torch.Size([640, 960, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 1280, 1, 1]).
Is there an existing issue for this?
What would your feature do?
Add support for loading SD 1.5 checkpoints in which some blocks/layers are missing from the UNet (e.g. pruned or distilled models), instead of failing with size-mismatch errors.
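A minimal sketch of what the feature could do internally, assuming access to the target model and the checkpoint's state dict: filter out parameters whose shapes disagree with the current model before calling `load_state_dict`, since `strict=False` already tolerates missing keys but not size mismatches. The helper name `load_partial_state_dict` is hypothetical, not part of the webui codebase.

```python
import torch

def load_partial_state_dict(model, state_dict):
    """Load only the checkpoint parameters whose shapes match the model.

    Hypothetical helper: instead of raising on size mismatches (as in the
    traceback above), skip the offending keys and return them for logging.
    """
    model_sd = model.state_dict()
    filtered, skipped = {}, []
    for key, value in state_dict.items():
        if key in model_sd and model_sd[key].shape == value.shape:
            filtered[key] = value
        else:
            skipped.append(key)
    # strict=False tolerates missing keys; the filtering above additionally
    # tolerates shape mismatches, so pruned/distilled checkpoints can load.
    model.load_state_dict(filtered, strict=False)
    return skipped
```

The skipped layers would keep their randomly initialized (or previously loaded) weights, so output quality is not guaranteed, but the model would at least load and run.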
Proposed workflow
Select the distilled .ckpt file as the checkpoint; loading it currently fails with the RuntimeError shown in the traceback above.
Additional information
Converted small-sd model, DL here