rupeshs / fastsdcpu

Fast stable diffusion on CPU
MIT License

lcm-lora-sdxl & lcm-lora-ssd-1b don't work #82

Closed: onlyreportingissues closed this issue 1 year ago

onlyreportingissues commented 1 year ago

stablediffusionapi/anything-v5 + latent-consistency/lcm-lora-sdxl fails with:

***** Init LCM-LoRA pipeline - stablediffusionapi/anything-v5 *****
Loading pipeline components...:  14%|████████████████████████▍                                                                                                                                                  | 1/7 [00:01<00:07,  1.23s/it]
/home/saidox/Downloads/fastsdcpu-main/env/lib64/python3.11/site-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
Loading pipeline components...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00,  4.41it/s]
Traceback (most recent call last):
  File "/home/saidox/Downloads/fastsdcpu-main/src/frontend/gui/image_generator_worker.py", line 29, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/saidox/Downloads/fastsdcpu-main/src/frontend/gui/app_window.py", line 538, in generate_image
    images = self.context.generate_text_to_image(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/saidox/Downloads/fastsdcpu-main/src/context.py", line 31, in generate_text_to_image
    self.lcm_text_to_image.init(
  File "/home/saidox/Downloads/fastsdcpu-main/src/backend/lcm_text_to_image.py", line 139, in init
    self.pipeline = get_lcm_lora_pipeline(
                    ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/saidox/Downloads/fastsdcpu-main/src/backend/pipelines/lcm_lora.py", line 17, in get_lcm_lora_pipeline
    pipeline.load_lora_weights(
  File "/home/saidox/Downloads/fastsdcpu-main/env/lib64/python3.11/site-packages/diffusers/loaders.py", line 1208, in load_lora_weights
    self.load_lora_into_unet(
  File "/home/saidox/Downloads/fastsdcpu-main/env/lib64/python3.11/site-packages/diffusers/loaders.py", line 1642, in load_lora_into_unet
    incompatible_keys = set_peft_model_state_dict(unet, state_dict, adapter_name)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/saidox/Downloads/fastsdcpu-main/env/lib64/python3.11/site-packages/peft/utils/save_and_load.py", line 158, in set_peft_model_state_dict
    load_result = model.load_state_dict(peft_model_state_dict, strict=False)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/saidox/Downloads/fastsdcpu-main/env/lib64/python3.11/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNet2DConditionModel:
    size mismatch for down_blocks.1.attentions.0.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
    size mismatch for down_blocks.1.attentions.0.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.1.attentions.0.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
    size mismatch for down_blocks.1.attentions.0.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for down_blocks.1.attentions.1.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
    size mismatch for down_blocks.1.attentions.1.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.1.attentions.1.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
    size mismatch for down_blocks.1.attentions.1.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for down_blocks.2.attentions.0.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for down_blocks.2.attentions.0.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.2.attentions.0.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for down_blocks.2.attentions.0.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for down_blocks.2.attentions.1.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for down_blocks.2.attentions.1.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.2.attentions.1.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for down_blocks.2.attentions.1.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.0.resnets.2.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1920, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 2560, 3, 3]).
    size mismatch for up_blocks.0.resnets.2.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1920, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 2560, 1, 1]).
    size mismatch for up_blocks.1.attentions.0.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.0.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.lora_B.default_0.weight: copying a param with shape torch.Size([5120, 64]) from checkpoint, the shape in current model is torch.Size([10240, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2560]) from checkpoint, the shape in current model is torch.Size([64, 5120]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.0.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.attentions.1.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.1.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.lora_B.default_0.weight: copying a param with shape torch.Size([5120, 64]) from checkpoint, the shape in current model is torch.Size([10240, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2560]) from checkpoint, the shape in current model is torch.Size([64, 5120]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.1.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.attentions.2.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.2.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.0.proj.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.0.proj.lora_B.default_0.weight: copying a param with shape torch.Size([5120, 64]) from checkpoint, the shape in current model is torch.Size([10240, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2560]) from checkpoint, the shape in current model is torch.Size([64, 5120]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.2.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.0.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1920, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 2560, 3, 3]).
    size mismatch for up_blocks.1.resnets.0.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.0.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.resnets.0.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
    size mismatch for up_blocks.1.resnets.0.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.0.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1920, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 2560, 1, 1]).
    size mismatch for up_blocks.1.resnets.0.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.1.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 2560, 3, 3]).
    size mismatch for up_blocks.1.resnets.1.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.1.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.resnets.1.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
    size mismatch for up_blocks.1.resnets.1.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.1.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 2560, 1, 1]).
    size mismatch for up_blocks.1.resnets.1.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.2.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 960, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1920, 3, 3]).
    size mismatch for up_blocks.1.resnets.2.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.2.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.resnets.2.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
    size mismatch for up_blocks.1.resnets.2.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.2.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 960, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 1920, 1, 1]).
    size mismatch for up_blocks.1.resnets.2.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.upsamplers.0.conv.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
    size mismatch for up_blocks.1.upsamplers.0.conv.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.0.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 960, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1920, 3, 3]).
    size mismatch for up_blocks.2.resnets.0.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.0.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64]) from checkpoint, the shape in current model is torch.Size([640, 64]).
    size mismatch for up_blocks.2.resnets.0.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 640, 3, 3]).
    size mismatch for up_blocks.2.resnets.0.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.0.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 960, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 1920, 1, 1]).
    size mismatch for up_blocks.2.resnets.0.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.1.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
    size mismatch for up_blocks.2.resnets.1.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.1.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64]) from checkpoint, the shape in current model is torch.Size([640, 64]).
    size mismatch for up_blocks.2.resnets.1.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 640, 3, 3]).
    size mismatch for up_blocks.2.resnets.1.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.1.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.2.resnets.1.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.2.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 960, 3, 3]).
    size mismatch for up_blocks.2.resnets.2.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.2.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64]) from checkpoint, the shape in current model is torch.Size([640, 64]).
    size mismatch for up_blocks.2.resnets.2.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 640, 3, 3]).
    size mismatch for up_blocks.2.resnets.2.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.2.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 960, 1, 1]).
    size mismatch for up_blocks.2.resnets.2.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for mid_block.attentions.0.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for mid_block.attentions.0.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for mid_block.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for mid_block.attentions.0.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for mid_block.attentions.0.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
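
The recurring mismatches above (LoRA weights sized for 2048 cross-attention inputs where the model expects 768, and 1280-wide blocks where the model has 640) point to lcm-lora-sdxl being distilled from an SDXL-family UNet, while anything-v5 is an SD 1.5-class model, so the adapter cannot fit that UNet. As a rough illustration only (plain diffusers, not fastsdcpu code; latent-consistency/lcm-lora-sdv1-5 is the adapter published for SD 1.5 bases), a compatible pairing can be chosen from the base model's cross-attention width:

# Illustrative sketch, not fastsdcpu code: pick the LCM-LoRA that matches the
# base model's UNet family. cross_attention_dim is 768 for SD 1.5-class models
# and 2048 for SDXL / SSD-1B, which is the 768-vs-2048 mismatch reported above.
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained("stablediffusionapi/anything-v5")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

if pipe.unet.config.cross_attention_dim == 768:
    # SD 1.5-class base: load the SD 1.5 LCM-LoRA
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
else:
    # SDXL or SSD-1B base: load the matching adapter instead
    pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

The traceback shows fastsdcpu's get_lcm_lora_pipeline calling the same pipeline.load_lora_weights, so on the user side the workaround is to pair an SD 1.5 base with lcm-lora-sdv1-5, or an SDXL / SSD-1B base with lcm-lora-sdxl / lcm-lora-ssd-1b.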
onlyreportingissues commented 1 year ago

Lykon/dreamshaper-8 + latent-consistency/lcm-lora-ssd-1b fails the same way:

***** Init LCM-LoRA pipeline - Lykon/dreamshaper-8 *****
Loading pipeline components...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00, 14.30it/s]
Traceback (most recent call last):
  File "/home/saidox/Downloads/fastsdcpu-main/src/frontend/gui/image_generator_worker.py", line 29, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/saidox/Downloads/fastsdcpu-main/src/frontend/gui/app_window.py", line 538, in generate_image
    images = self.context.generate_text_to_image(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/saidox/Downloads/fastsdcpu-main/src/context.py", line 31, in generate_text_to_image
    self.lcm_text_to_image.init(
  File "/home/saidox/Downloads/fastsdcpu-main/src/backend/lcm_text_to_image.py", line 139, in init
    self.pipeline = get_lcm_lora_pipeline(
                    ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/saidox/Downloads/fastsdcpu-main/src/backend/pipelines/lcm_lora.py", line 17, in get_lcm_lora_pipeline
    pipeline.load_lora_weights(
  File "/home/saidox/Downloads/fastsdcpu-main/env/lib64/python3.11/site-packages/diffusers/loaders.py", line 1208, in load_lora_weights
    self.load_lora_into_unet(
  File "/home/saidox/Downloads/fastsdcpu-main/env/lib64/python3.11/site-packages/diffusers/loaders.py", line 1642, in load_lora_into_unet
    incompatible_keys = set_peft_model_state_dict(unet, state_dict, adapter_name)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/saidox/Downloads/fastsdcpu-main/env/lib64/python3.11/site-packages/peft/utils/save_and_load.py", line 158, in set_peft_model_state_dict
    load_result = model.load_state_dict(peft_model_state_dict, strict=False)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/saidox/Downloads/fastsdcpu-main/env/lib64/python3.11/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNet2DConditionModel:
    size mismatch for down_blocks.1.attentions.0.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
    size mismatch for down_blocks.1.attentions.0.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.1.attentions.0.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
    size mismatch for down_blocks.1.attentions.0.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for down_blocks.1.attentions.1.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
    size mismatch for down_blocks.1.attentions.1.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.1.attentions.1.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 640, 1, 1]).
    size mismatch for down_blocks.1.attentions.1.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for down_blocks.2.attentions.0.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for down_blocks.2.attentions.0.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.2.attentions.0.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for down_blocks.2.attentions.0.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for down_blocks.2.attentions.1.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for down_blocks.2.attentions.1.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for down_blocks.2.attentions.1.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for down_blocks.2.attentions.1.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([1280, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.0.resnets.2.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1920, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 2560, 3, 3]).
    size mismatch for up_blocks.0.resnets.2.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1920, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 2560, 1, 1]).
    size mismatch for up_blocks.1.attentions.0.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.0.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.lora_B.default_0.weight: copying a param with shape torch.Size([5120, 64]) from checkpoint, the shape in current model is torch.Size([10240, 64]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2560]) from checkpoint, the shape in current model is torch.Size([64, 5120]).
    size mismatch for up_blocks.1.attentions.0.transformer_blocks.0.ff.net.2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.0.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.0.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.attentions.1.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.1.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.lora_B.default_0.weight: copying a param with shape torch.Size([5120, 64]) from checkpoint, the shape in current model is torch.Size([10240, 64]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2560]) from checkpoint, the shape in current model is torch.Size([64, 5120]).
    size mismatch for up_blocks.1.attentions.1.transformer_blocks.0.ff.net.2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.1.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.1.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.attentions.2.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.2.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn1.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_q.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_q.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_k.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_k.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_v.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2048]) from checkpoint, the shape in current model is torch.Size([64, 768]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_v.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_out.0.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.attn2.to_out.0.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.0.proj.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.0.proj.lora_B.default_0.weight: copying a param with shape torch.Size([5120, 64]) from checkpoint, the shape in current model is torch.Size([10240, 64]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 2560]) from checkpoint, the shape in current model is torch.Size([64, 5120]).
    size mismatch for up_blocks.1.attentions.2.transformer_blocks.0.ff.net.2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.attentions.2.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.1.attentions.2.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.0.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1920, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 2560, 3, 3]).
    size mismatch for up_blocks.1.resnets.0.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.0.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.resnets.0.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
    size mismatch for up_blocks.1.resnets.0.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.0.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1920, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 2560, 1, 1]).
    size mismatch for up_blocks.1.resnets.0.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.1.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 2560, 3, 3]).
    size mismatch for up_blocks.1.resnets.1.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.1.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.resnets.1.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
    size mismatch for up_blocks.1.resnets.1.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.1.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 1280, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 2560, 1, 1]).
    size mismatch for up_blocks.1.resnets.1.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.2.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 960, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1920, 3, 3]).
    size mismatch for up_blocks.1.resnets.2.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.2.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64]) from checkpoint, the shape in current model is torch.Size([1280, 64]).
    size mismatch for up_blocks.1.resnets.2.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
    size mismatch for up_blocks.1.resnets.2.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.resnets.2.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 960, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 1920, 1, 1]).
    size mismatch for up_blocks.1.resnets.2.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.1.upsamplers.0.conv.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
    size mismatch for up_blocks.1.upsamplers.0.conv.lora_B.default_0.weight: copying a param with shape torch.Size([640, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1280, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.0.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 960, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1920, 3, 3]).
    size mismatch for up_blocks.2.resnets.0.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.0.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64]) from checkpoint, the shape in current model is torch.Size([640, 64]).
    size mismatch for up_blocks.2.resnets.0.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 640, 3, 3]).
    size mismatch for up_blocks.2.resnets.0.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.0.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 960, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 1920, 1, 1]).
    size mismatch for up_blocks.2.resnets.0.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.1.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1280, 3, 3]).
    size mismatch for up_blocks.2.resnets.1.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.1.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64]) from checkpoint, the shape in current model is torch.Size([640, 64]).
    size mismatch for up_blocks.2.resnets.1.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 640, 3, 3]).
    size mismatch for up_blocks.2.resnets.1.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.1.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 1280, 1, 1]).
    size mismatch for up_blocks.2.resnets.1.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.2.conv1.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 960, 3, 3]).
    size mismatch for up_blocks.2.resnets.2.conv1.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.2.time_emb_proj.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64]) from checkpoint, the shape in current model is torch.Size([640, 64]).
    size mismatch for up_blocks.2.resnets.2.conv2.lora_A.default_0.weight: copying a param with shape torch.Size([64, 320, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 640, 3, 3]).
    size mismatch for up_blocks.2.resnets.2.conv2.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
    size mismatch for up_blocks.2.resnets.2.conv_shortcut.lora_A.default_0.weight: copying a param with shape torch.Size([64, 640, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 960, 1, 1]).
    size mismatch for up_blocks.2.resnets.2.conv_shortcut.lora_B.default_0.weight: copying a param with shape torch.Size([320, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([640, 64, 1, 1]).
rupeshs commented 1 year ago

For Dreamshaper v8 you need to select the SD 1.5 LCM-LoRA model, not the SDXL one. The LoRA and the base model must be compatible.
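
In other words, an SDXL (or SSD-1B) LCM-LoRA cannot be applied to an SD 1.5 UNet, which is why the state_dict shapes in the traceback above don't line up. Below is a minimal sketch of the compatible pairing using plain diffusers, not the exact fastsdcpu code path; the model IDs and generation parameters are illustrative assumptions.

```python
# Sketch only: pair an SD 1.5 base model with the SD 1.5 LCM-LoRA.
# Using the SDXL LCM-LoRA here is what produces the size-mismatch errors above.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

base_model = "Lykon/dreamshaper-8"                  # SD 1.5 architecture (illustrative)
lcm_lora = "latent-consistency/lcm-lora-sdv1-5"     # matching SD 1.5 LCM-LoRA
# lcm_lora = "latent-consistency/lcm-lora-sdxl"     # SDXL-only; incompatible with an SD 1.5 UNet

pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.float32)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(lcm_lora)   # raises shape errors if the LoRA targets a different UNet
pipe.to("cpu")

image = pipe(
    "a portrait of an astronaut, highly detailed",
    num_inference_steps=4,   # LCM-LoRA needs only a few steps
    guidance_scale=1.0,      # low/zero CFG is typical for LCM
).images[0]
image.save("out.png")
```

The same rule applies in the fastsdcpu GUI: pick the LCM-LoRA variant that matches the base model's architecture (SD 1.5 LoRA for SD 1.5 checkpoints such as anything-v5 or Dreamshaper, SDXL LoRA for SDXL checkpoints).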