lllyasviel / ControlNet

Let us control diffusion models!
Apache License 2.0

Error using ControlNet Reference w/ Latent Couple #429

Open TheZAbides opened 1 year ago

TheZAbides commented 1 year ago

Bit of an edge case, I'm sure, and likely more so an issue with Latent Couple (I'll report there too), but I thought I'd mention it. Trying to create a 910x512 image using:

- Clip Skip: 2
- Lora: 1
- Steps: 15
- CFG Scale: 8
- ControlNet - Reference - reference_only - My prompt is more important - Resize and Fill - reference image is 910x512
- ControlNet - Canny - canny - Pixel Perfect - ControlNet is more important - Resize and Fill - reference image is 910x512
- ControlNet - Depth - depth_midas - Pixel Perfect - ControlNet is more important - Resize and Fill - reference image is 910x512
- Latent Couple - 4 sections - reference image is 910x512

I'm able to get this setup to run perfectly only if I either 1) remove the Latent Couple "AND" parts from the prompt, or 2) turn off ControlNet Reference.
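As a side note, the "Pixel Perfect Computation" values that appear in the log below check out arithmetically. A minimal sketch, assuming OUTER_FIT scales the shorter raw side by the limiting target/raw ratio (the formula is inferred from the logged numbers, not taken from the extension's source):

```python
# Inferred from the "Pixel Perfect Computation" block in the log below; the
# exact formula the extension uses is an assumption based on these numbers.
raw_h, raw_w = 512, 910        # reference image size
target_h, target_w = 512, 904  # generation size (910 rounded down to a multiple of 8)
scale = min(target_h / raw_h, target_w / raw_w)  # OUTER_FIT: fit inside the target
estimation = min(raw_h, raw_w) * scale
print(estimation)         # matches the logged estimation = 508.62417582417584
print(round(estimation))  # 509, the logged preprocessor resolution
```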

Here is the error:

Loading preprocessor: reference_only
preprocessor resolution = 512
locon load lora method
  0%|          | 0/15 [00:00<?, ?it/s]
ControlNet used torch.float16 VAE to encode torch.Size([1, 4, 64, 113]).
100%|█████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:05<00:00,  2.80it/s]
Total progress: 100%|█████████████████████████████████████████████████████████████████████████████| 15/15 [00:04<00:00,  3.04it/s]
100%|██████████| 15/15 [00:04<00:00,  3.28it/s]
Loading preprocessor: reference_only
preprocessor resolution = 512
Loading model: control_v11f1p_sd15_depth [cfd03158]
Loaded state_dict from [C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\models\ControlNet\ControlNet-v1-1\control_v11f1p_sd15_depth.pth]
Loading config: C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\models\ControlNet\ControlNet-v1-1\control_v11f1p_sd15_depth.yaml
ControlNet model control_v11f1p_sd15_depth [cfd03158] loaded.
Loading preprocessor: depth
Pixel Perfect Computation:
resize_mode = ResizeMode.OUTER_FIT
raw_H = 512
raw_W = 910
target_H = 512
target_W = 904
estimation = 508.62417582417584
preprocessor resolution = 509
Loading model: control_v11p_sd15_canny [d14c016b]
Loaded state_dict from [C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\models\ControlNet\ControlNet-v1-1\control_v11p_sd15_canny.pth]
Loading config: C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\models\ControlNet\ControlNet-v1-1\control_v11p_sd15_canny.yaml
ControlNet model control_v11p_sd15_canny [d14c016b] loaded.
Loading preprocessor: canny
Pixel Perfect Computation:
resize_mode = ResizeMode.OUTER_FIT
raw_H = 512
raw_W = 910
target_H = 512
target_W = 904
estimation = 508.62417582417584
preprocessor resolution = 509
locon load lora method
  0%|          | 0/15 [00:00<?, ?it/s]
ControlNet used torch.float16 VAE to encode torch.Size([1, 4, 64, 113]).
  0%|                                                                                                      | 0/15 [00:01<?, ?it/s]
Error completing request
Arguments: ('task(ghn1dalb79o5w32)', '(forest sunrise with stream winding through:1.2) <lora:detailmaker:1>\nAND (giant old craggy stone with waterfalls pouring down it:0.9) (with overgrown moss hanging-vines wildflowers growing on it:1.3)\nAND (giant old tree stumps stone:0.9) (with overgrown clumps of soft green moss and creeping-vines and wildflowers growing on it:1.3) \nAND (giant cluster of vines and flowers:0.9) (with butterflies and humming birds fluttering around it it:1.3)', '(bad-artist:0.25) (EasyNegative:1) (low quality, worst quality:1.3) (text, signature, watermark:1.2) (people, person, structure, building, window, house:1.5), fantasy (fire:1.3)', [], 15, 0, False, False, 1, 1, 8, -1.0, -1.0, 0, 0, 0, False, 512, 910, False, 0.7, 2, 'R-ESRGAN 4x+ Anime6B', 0, 0, 0, 0, '', '', [], 0, '\n    <div style="padding: 10px">\n      <div>Estimated VRAM usage: <span style="color: rgb(255.00, 31.35, 4.62)">7891.27 MB / 10240 MB (77.06%)</span></div>\n      <div>(5679 MB system + 2011.16 MB used)</div>\n    </div>\n    ', False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 32, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 32, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 
0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1}, False, 'MultiDiffusion', False, True, 1024, 1024, 128, 128, 84, 1, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 1536, 128, True, True, True, False, False, '', 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000010B3F83AB90>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000010DB67F48E0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000010B3F79E230>, False, False, 'Matrix', 'Horizontal', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', False, '0', '0', '0.4', None, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 150, 0.2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, '', 5, 24, 12.5, 1000, '', 'DDIM', 0, 64, 64, '', 64, 7.5, 0.42, 'DDIM', 64, 64, 1, 0, 92, True, True, True, False, False, False, 'midas_v21_small', None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\processing.py", line 611, in process_images
    res = process_images_inner(p)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\processing.py", line 729, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 293, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\processing.py", line 977, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 159, in forward
    x_out[-uncond.shape[0]:] = self.inner_model(x_in[-uncond.shape[0]:], sigma_in[-uncond.shape[0]:], cond=make_condition_dict([uncond], image_cond_in[-uncond.shape[0]:]))
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 628, in forward_webui
    return forward(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 531, in forward
    outer.original_forward(
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 664, in hacked_basic_transformer_inner_forward
    x = self.attn2(self.norm2(x), context=context) + x
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 515, in scaled_dot_product_no_mem_attention_forward
    return scaled_dot_product_attention_forward(self, x, context, mask)
  File "C:\Stable_Diffusion\SD-WebUI_02\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 490, in scaled_dot_product_attention_forward
    k = k_in.view(batch_size, -1, h, head_dim).transpose(1, 2)
RuntimeError: shape '[2, -1, 8, 40]' is invalid for input of size 24640
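Aside: the failing view itself is easy to reproduce in isolation. A minimal sketch using the numbers from the traceback above (reading 24640 as 77 context tokens x 8 heads x 40 head dims, i.e. a single-prompt key meeting a doubled batch, is my assumption about the cause, not something the log states):

```python
import torch

# 24640 elements cannot be reshaped into (batch=2, seq, heads=8, head_dim=40),
# because 24640 is not a multiple of 2 * 8 * 40 = 640 (24640 / 640 = 38.5).
k_in = torch.randn(24640)  # flattened key tensor; element count from the log
batch_size, h, head_dim = 2, 8, 40
try:
    k_in.view(batch_size, -1, h, head_dim)
except RuntimeError as e:
    print(e)  # shape '[2, -1, 8, 40]' is invalid for input of size 24640
```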
CAOTTAA commented 1 year ago

I am also getting a different error when using Latent Couple and ControlNet Reference, but both errors should be caused by mismatched dimensions.

Traceback (most recent call last):

  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/call_queue.py", line 55, in f
    res = list(func(*args, **kwargs))
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/call_queue.py", line 35, in f
    res = func(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/processing.py", line 620, in process_images
    res = process_images_inner(p)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/processing.py", line 739, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 350, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/processing.py", line 992, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 433, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 275, in launch_sampling
    return func()
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 433, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 597, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 174, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 114, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 140, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_hijack_utils.py", line 26, in __call__
    return self.__sub_func(self.__orig_func, *args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_hijack_unet.py", line 45, in apply_model
    return orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), *cond, **kwargs).float()
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1548, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 661, in forward_webui
    return forward(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 564, in forward
    outer.original_forward(
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_unet.py", line 91, in UNetModel_forward
    return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/hook.py", line 697, in hacked_basic_transformer_inner_forward
    x = self.attn2(self.norm2(x), context=context) + x
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1502, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/venv-torch-nightly/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 350, in split_cross_attention_forward_invokeAI
    r = einsum_op(q, k, v)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 325, in einsum_op
    return einsum_op_mps_v2(q, k, v)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 297, in einsum_op_mps_v2
    return einsum_op_slice_0(q, k, v, 1)
  File "/Users/tongtongcao/Documents/GIT_REPO/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 274, in einsum_op_slice_0
    r[i:end] = einsum_op_compvis(q[i:end], k[i:end], v[i:end])
RuntimeError: The expanded size of the tensor (1) must match the existing size (0) at non-singleton dimension 0. Target sizes: [1, 6144, 40]. Tensor sizes: [0, 6144, 40]


When hitting the error in the einsum_op_slice_0 function, q's shape is (16, xx, xx) but k and v are (8, xx, xx).
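That batch mismatch explains the assignment failure: slicing q, k, and v with the same indices means the q chunks past batch 8 pair with empty k/v chunks, and the empty attention result cannot be written back into the full-size output buffer. A minimal sketch (shapes assumed from the comment above, sequence length reduced for brevity):

```python
import torch

# q carries a doubled batch of 16 (Latent Couple's split cond), while k and v
# still have the original batch of 8 (ControlNet Reference's context).
q = torch.randn(16, 64, 40)
k = torch.randn(8, 64, 40)
r = torch.zeros_like(q)          # output buffer sized from q
i, end = 8, 9                    # first slice beyond k's batch dimension
empty_out = torch.empty(k[i:end].shape[0], 64, 40)  # batch 0: k[8:9] is empty
try:
    r[i:end] = empty_out         # target [1, 64, 40] vs source [0, 64, 40]
except RuntimeError as e:
    print(e)  # "The expanded size of the tensor (1) must match the existing size (0)..."
```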