Mikubill / sd-webui-controlnet

WebUI extension for ControlNet
GNU General Public License v3.0

[Bug]: RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x2048 and 1024x320) #2240

Closed. TomYule closed this issue 11 months ago.

TomYule commented 11 months ago

Is there an existing issue for this?

What happened?

When I tried to zoom in on (upscale) the image, something went wrong. Similar errors occur with many of the models I have used:

https://huggingface.co/thibaud/controlnet-sd21
https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main

Models used:

control_v11p_sd21_depth 
control_v11p_sd15_inpaint

Steps to reproduce the problem

  1. Get picture information
    parameters
    clownfish,coral reef,bubble,kelp,
    Negative prompt: 3d,realistic,badhandv4:1.4,EasyNegative,ng_deepnegative_v1_75t,bad anatomy,futa,sketches,(worst quality:2),(low quality:2),(normal quality:2),lowres,normal quality,monochrome,grayscale,(pointed chin),skin spots,acnes,skin blemishes(fat:1.2),facing away,looking away,
    Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3483093119, Size: 512x512, Model hash: d2b9e5240c, Model: zavychromaxl_v21, ENSD: 31337, Version: v1.6.0 
  2. Text to generate images
  3. Modify the length of the image
  4. controlNet

(Screenshots attached: 01-1; Screenshot 2023-11-07 at 10-56-52 Stable Diffusion)

What should have happened?

The image was successfully generated

Commit where the problem happens

webui:
version: v1.6.0  •  python: 3.10.11  •  torch: 2.0.1+cu118  •  xformers: 0.0.21  •  gradio: 3.41.2  •  checkpoint: d2b9e5240c

controlnet: ControlNet v1.1.406

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

--theme dark --xformers --api --autolaunch --listen

List of enabled extensions

controlNet

Console logs

Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.6.0
Commit hash: 5ef669de080814067961f28357256e8fe27544f4
current transparent-background 1.2.9
Installing requirements for Mov2mov
Installing requirements for imageio-ffmpeg
Launching Web UI with arguments: --theme dark --xformers --api --autolaunch --listen
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
2023-11-07 10:52:56,028 - ControlNet - INFO - ControlNet v1.1.406
ControlNet preprocessor location: E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\annotator\downloads
2023-11-07 10:52:56,206 - ControlNet - INFO - ControlNet v1.1.406
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [d2b9e5240c] from E:\AIProject\sd-webui-aki-v4.4\models\Stable-diffusion\zavychromaxl_v21.safetensors
Running on local URL:  http://0.0.0.0:7860
Creating model from config: E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\configs\inference\sd_xl_base.yaml

To create a public link, set `share=True` in `launch()`.
Startup time: 36.4s (prepare environment: 12.0s, import torch: 5.8s, import gradio: 2.3s, setup paths: 1.0s, initialize shared: 0.4s, other imports: 1.2s, setup codeformer: 0.2s, load scripts: 4.5s, create ui: 1.6s, gradio launch: 6.8s, app_started_callback: 0.5s).
Applying attention optimization: xformers... done.
Model loaded in 32.2s (load weights from disk: 2.5s, create model: 0.5s, apply weights to model: 25.3s, apply half(): 0.2s, move model to device: 0.1s, load textual inversion embeddings: 2.1s, calculate empty prompt: 1.2s).
2023-11-07 10:56:03,823 - ControlNet - INFO - Loading model: control_v11p_sd21_depth [2722c7d7]
2023-11-07 10:56:03,885 - ControlNet - INFO - Loaded state_dict from [E:\AIProject\sd-webui-aki-v4.4\models\ControlNet\control_v11p_sd21_depth.safetensors]
2023-11-07 10:56:03,886 - ControlNet - INFO - controlnet_default_config
2023-11-07 10:56:06,930 - ControlNet - INFO - ControlNet model control_v11p_sd21_depth [2722c7d7] loaded.
2023-11-07 10:56:07,056 - ControlNet - INFO - Loading preprocessor: depth
2023-11-07 10:56:07,057 - ControlNet - INFO - preprocessor resolution = 512
2023-11-07 10:56:35,575 - ControlNet - INFO - ControlNet Hooked - Time = 33.35104775428772
*** Error completing request
*** Arguments: ('task(kf1130rm672urp1)', 'clownfish,coral reef,bubble,kelp,', '3d,realistic,badhandv4:1.4,EasyNegative,ng_deepnegative_v1_75t,bad anatomy,futa,sketches,(worst quality:2),(low quality:2),(normal quality:2),lowres,normal quality,monochrome,grayscale,(pointed chin),skin spots,acnes,skin blemishes(fat:1.2),facing away,looking away,', [], 20, 'Euler a', 1, 1, 7, 768, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000235FDF8C820>, 0, False, '', 0.8, 3483093119, False, -1, 0, 0, 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000023581872500>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000023581872650>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000023581908FD0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000002358190B070>, 
'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False) {}
    Traceback (most recent call last):
      File "E:\AIProject\sd-webui-aki-v4.4\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "E:\AIProject\sd-webui-aki-v4.4\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "E:\AIProject\sd-webui-aki-v4.4\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 451, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_samplers_cfg_denoiser.py", line 188, in forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_models_xl.py", line 37, in apply_model
        return self.model(x, t, cond)
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 853, in forward_webui
        raise e
      File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 850, in forward_webui
        return forward(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\hook.py", line 591, in forward
        control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\cldm.py", line 31, in forward
        return self.control_model(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\extensions\sd-webui-controlnet\scripts\cldm.py", line 314, in forward
        h = module(h, emb, context)
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 100, in forward
        x = layer(x, context)
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\attention.py", line 627, in forward
        x = block(x, context=context[i])
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\attention.py", line 459, in forward
        return checkpoint(
      File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 167, in checkpoint
        return func(*inputs)
      File "E:\AIProject\sd-webui-aki-v4.4\repositories\generative-models\sgm\modules\attention.py", line 478, in _forward
        self.attn2(
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\modules\sd_hijack_optimizations.py", line 486, in xformers_attention_forward
        k_in = self.to_k(context_k)
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AIProject\sd-webui-aki-v4.4\extensions-builtin\Lora\networks.py", line 429, in network_Linear_forward
        return originals.Linear_forward(self, input)
      File "E:\AIProject\sd-webui-aki-v4.4\python\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
    RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x2048 and 1024x320)
Hint: the Python runtime raised an exception. Check the troubleshooting page.

Additional information

No response

sdbds commented 11 months ago

You are using an SDXL model with an SD1.5 ControlNet.
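The mismatch in the traceback can be reproduced in isolation. A minimal sketch (assuming PyTorch is installed): SDXL text conditioning is 77 tokens of 2048 dimensions each, while an SD2.1 ControlNet's cross-attention `to_k` projection (the `self.to_k(context_k)` call in the traceback) is a `Linear(1024, 320)` layer, so `F.linear` fails with exactly the reported shapes.

```python
import torch

# SDXL text conditioning: 77 tokens, each a 2048-dim embedding.
context = torch.randn(77, 2048)

# Cross-attention key projection as found in an SD2.1 ControlNet:
# maps 1024-dim conditioning into the 320-channel first UNet stage.
to_k = torch.nn.Linear(1024, 320, bias=False)

try:
    to_k(context)  # internally F.linear(context, weight of shape [320, 1024])
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (77x2048 and 1024x320)
```

The ControlNet weights are hooked into the SDXL UNet's forward pass, so the SDXL conditioning tensor is fed straight into the SD2.1-shaped projection and the multiplication fails.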

TomYule commented 11 months ago

> You are using an SDXL model with an SD1.5 ControlNet.

No, I get the same error when I use control_v11p_sd21_depth

sdbds commented 11 months ago

> You are using an SDXL model with an SD1.5 ControlNet.
>
> No, I get the same error when I use control_v11p_sd21_depth

You should use an SDXL ControlNet model, not an SD1.5 or SD2.1 one. control_v11p_sd21_depth is an SD2.1 ControlNet, so it fails against an SDXL checkpoint for the same reason.
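If it is unclear which base model a ControlNet checkpoint targets, the width of its cross-attention key projection gives it away: 768 for SD1.5, 1024 for SD2.x, 2048 for SDXL. A hypothetical helper (the function name and the stand-in state_dict below are illustrative, not part of the extension):

```python
import torch

def controlnet_base_family(state_dict):
    """Guess the base-model family of a ControlNet checkpoint from the
    input width of its first cross-attention key projection."""
    for name, tensor in state_dict.items():
        if name.endswith("attn2.to_k.weight"):
            dim = tensor.shape[1]
            return {768: "SD1.5", 1024: "SD2.x", 2048: "SDXL"}.get(
                dim, f"unknown ({dim})"
            )
    return "no cross-attention weights found"

# Stand-in state_dict shaped like an SD2.1 ControlNet's first attention block:
fake_sd21 = {
    "input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight": torch.zeros(320, 1024)
}
print(controlnet_base_family(fake_sd21))  # SD2.x
```

For a real `.safetensors` file, the state_dict could be loaded with the `safetensors` library before passing it to the helper.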

TomYule commented 11 months ago

Is this it? https://huggingface.co/lllyasviel/sd_control_collection/tree/main