Mikubill / sd-webui-controlnet

WebUI extension for ControlNet
GNU General Public License v3.0
17.07k stars 1.96k forks

[Bug]: All Reference Preprocessors are generating errors for SD1.5 and SDXL models #2329

Closed Shangooriginal closed 10 months ago

Shangooriginal commented 11 months ago

Is there an existing issue for this?

What happened?

All of the Reference preprocessors generate errors when I try to use them with either SD1.5 or SDXL models.

Steps to reproduce the problem

I upload a picture for reference, adjust the settings, and click Generate; an error message appears. See below.

RuntimeError: shape '[81920, 8, 40]' is invalid for input of size 3276800
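For what it's worth, the numbers in this error are self-consistent with the query and key disagreeing on shape by a factor of 8 (illustrative arithmetic only, based on the `key.reshape([batch * seqlen_kv, num_heads, head_dim])` call shown further down in the traceback):

```python
# Illustrative arithmetic only: xformers attempts
#   key.reshape([batch * seqlen_kv, num_heads, head_dim])
# with batch * seqlen_kv = 81920, num_heads = 8, head_dim = 40,
# but the key tensor only holds 3,276,800 elements.
expected = 81920 * 8 * 40   # elements the reshape requires
actual = 3276800            # elements the key tensor actually has
print(expected // actual)   # 8: the shapes disagree by a factor of 8
```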

What should have happened?

It should just work and generate a new image guided by the reference.

Commit where the problem happens

webui: version: [v1.7.0-RC-4-g120a84bd] controlnet: [a13bd2fe]

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--xformers

List of enabled extensions

Not applicable

Console logs

RuntimeError: shape '[81920, 8, 40]' is invalid for input of size 3276800

*** Error completing request
*** Arguments: ('task(2bl0opahhbql4kh)', 'Fashion model, Asian ethnicity, futuristic urban wear, metallic accents, sleek lines, white background', '', ['Easy_Bad_NegPrompt'], 30, 'DPM++ 2M SDE Karras', 1, 1, 7, 640, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001CA12ADA560>, 0, -1, False, -1, 0, 0, 0, False, '', 0.8, False, False, False, False, 'base', False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': 
False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 3072, 192, True, True, True, False, False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, UiControlNetUnit(enabled=True, module='reference_adain+attn', model='None', weight=1, image={'image': array([[[193, 193, 194],
***         [193, 193, 196],
***         [191, 192, 194],
***         ...,
***         [183, 184, 183],
***         [182, 184, 184],
***         [183, 182, 183]],
***
***        [[192, 191, 193],
***         [193, 193, 195],
***         [193, 191, 195],
***         ...,
***         [183, 183, 184],
***         [183, 183, 184],
***         [182, 182, 184]],
***
***        [[191, 190, 193],
***         [191, 192, 193],
***         [193, 193, 194],
***         ...,
***         [184, 184, 185],
***         [183, 184, 183],
***         [184, 182, 183]],
***
***        ...,
***
***        [[225, 224, 229],
***         [224, 223, 228],
***         [224, 224, 227],
***         ...,
***         [223, 225, 228],
***         [222, 224, 227],
***         [223, 223, 228]],
***
***        [[225, 225, 229],
***         [224, 224, 227],
***         [224, 224, 228],
***         ...,
***         [221, 223, 225],
***         [221, 222, 225],
***         [222, 223, 226]],
***
***        [[225, 224, 229],
***         [224, 224, 229],
***         [225, 223, 230],
***         ...,
***         [223, 223, 227],
***         [222, 224, 226],
***         [222, 222, 224]]], dtype=uint8), 'mask': array([[[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        ...,
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=0.5, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=64, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=64, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=64, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, True, 3, 4, 0.15, 0.3, 'bicubic', 0.5, 2, True, False, False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '') {}
    Traceback (most recent call last):
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\processing.py", line 734, in process_images
        res = process_images_inner(p)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\processing.py", line 868, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 423, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\processing.py", line 1142, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 840, in forward_webui
        raise e
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 837, in forward_webui
        return forward(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 722, in forward
        outer.original_forward(
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 884, in hacked_basic_transformer_inner_forward
        self_attn1 = self.attn1(x_norm1, context=self_attention_context)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\extensions-builtin\hypertile\hypertile.py", line 307, in wrapper
        out = params.forward(x, *args[1:], **kwargs)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 496, in xformers_attention_forward
        out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=get_xformers_flash_attention_op(q, k, v))
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
        return _memory_efficient_attention(
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
        return _memory_efficient_attention_forward(
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 310, in _memory_efficient_attention_forward
        out, *_ = op.apply(inp, needs_gradient=False)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\flash.py", line 235, in apply
        ) = _convert_input_format(inp)
      File "C:\Users\xxxxx\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\flash.py", line 177, in _convert_input_format
        key=key.reshape([batch * seqlen_kv, num_heads, head_dim_q]),
    RuntimeError: shape '[81920, 8, 40]' is invalid for input of size 3276800

Additional information

No response

huchenlei commented 11 months ago

Can you provide more information on how to reproduce the issue? Running reference_only seems fine in my local environment.

garugann commented 10 months ago

I also encountered a similar error. It seems to occur when using --xformers in version 1.7. The error does not seem to occur if venv is deleted and --xformers is not used. I have not thoroughly verified this, and it is not exactly the same error, so this may not be the cause, but I hope it helps.

huchenlei commented 10 months ago

Which xformers version do you use, @garugann? With the --xformers flag in 1.7.0 A1111, it seems to work fine for me.

garugann commented 10 months ago

version: v1.7.0  •  python: 3.10.11  •  torch: 2.0.1+cu118  •  xformers: 0.0.20  •  gradio: 3.41.2  •  checkpoint: 735df1f05d

An error occurs in this environment. To be honest, I don't know if this is the real cause, but in the meantime I can generate with Reference successfully when --xformers is not used.

huchenlei commented 10 months ago

I have the following env and reference is working fine: version: v1.7.0-RC-5-gf92d6149  •  python: 3.10.6  •  torch: 2.0.1+cu118  •  xformers: 0.0.20  •  gradio: 3.41.2  •  checkpoint: 79e42fb744

garugann commented 10 months ago

While comparing it to a completely fresh environment, I realized that my config.json file might be outdated. After deleting it and resetting the config, it seems to operate without any problems even in the same environment. version: v1.7.0  •  python: 3.10.11  •  torch: 2.0.1+cu118  •  xformers: 0.0.20  •  gradio: 3.41.2

garugann commented 10 months ago

I think it might be Hypertile. If I toggle it on or off in the settings, the error appears or disappears accordingly.

zongmi commented 9 months ago

Traceback (most recent call last): File "D:\gongxiang\stable-diffusion-webui\modules\call_queue.py", line 57, in f res = list(func(*args, kwargs)) ^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\call_queue.py", line 36, in f res = func(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img processed = processing.process_images(p) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\processing.py", line 734, in process_images res = process_images_inner(p) ^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 41, in processing_process_images_hijack return getattr(processing, '__controlnet_original_process_images_inner')(p, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\processing.py", line 868, in process_images_inner samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 435, in process_sample return process.sample_before_CN_hack(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\processing.py", line 1142, in sample samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in 
sample samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling return func() ^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, extra_params_kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m denoised = model(x, sigmas[i] s_in, extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps return self.inner_model.apply_model(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\sd_models_xl.py", line 37, in apply_model return self.model(x, t, cond) ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1568, in _call_impl result = forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in setattr(resolved_obj, func_path[-1], lambda args, kwargs: self(*args, kwargs)) ^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call return self.__orig_func(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward return self.diffusion_model( ^^^^^^^^^^^^^^^^^^^^^ File 
"C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 845, in forward_webui raise e File "D:\gongxiang\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 842, in forward_webui return forward(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 720, in forward outer.original_forward( File "D:\gongxiang\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward return original_forward(self, x, timesteps, context, args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 993, in forward h = module(h, emb, context) ^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 898, in hacked_group_norm_forward x = self.original_forward_cn_hijack(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"D:\gongxiang\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\openaimodel.py", line 100, in forward x = layer(x, context) ^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 627, in forward x = block(x, context=context[i]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 459, in forward return checkpoint( ^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 165, in checkpoint return CheckpointFunction.apply(func, len(inputs), args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\autograd\function.py", line 539, in apply return super().apply(args, kwargs) # type: ignore[misc] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 182, in forward output_tensors = ctx.run_function(ctx.input_tensors) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 889, in hacked_basic_transformer_inner_forward self_attn1 = self.attn1(x_norm1, context=self_attention_context) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\extensions-builtin\hypertile\hypertile.py", line 307, in wrapper out = params.forward(x, args[1:], kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\gongxiang\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 496, in xformers_attention_forward out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=get_xformers_flash_attention_op(q, k, v)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha__init.py", line 223, in memory_efficient_attention return _memory_efficient_attention( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha__init__.py", line 321, in _memory_efficient_attention return _memory_efficient_attention_forward( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\init__.py", line 334, in _memory_efficient_attention_forward inp.validate_inputs() File "C:\Users\Owner\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\common.py", line 197, in 
validate_inputs raise ValueError( ValueError: Incompatible shapes for attention inputs: query.shape: torch.Size([4, 512, 10, 64]) key.shape : torch.Size([2, 1024, 10, 64]) value.shape: torch.Size([2, 1024, 10, 64]) HINT: We don't support broadcasting, please use expand yourself before calling memory_efficient_attention if you need to
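The ValueError comes from xformers' input validation: `memory_efficient_attention` takes `[batch, seq_len, num_heads, head_dim]` tensors and refuses to broadcast, so the batch and head dimensions of query, key, and value must match exactly. A minimal pure-Python sketch of that rule (a hypothetical helper for illustration, not the actual xformers source) shows why the shapes in the trace fail — the query was tiled by Hypertile (batch 4, seq 512) while ControlNet's reference branch still supplies the untiled key/value (batch 2, seq 1024):

```python
def validate_attention_inputs(q_shape, k_shape, v_shape):
    """Raise ValueError if the shapes cannot be attended without broadcasting.

    Mimics (in spirit) the check xformers performs on
    [batch, seq_len, num_heads, head_dim] inputs.
    """
    for name, shape in (("query", q_shape), ("key", k_shape), ("value", v_shape)):
        if len(shape) != 4:
            raise ValueError(f"{name} must be 4-D [B, M, H, K], got {shape}")
    qb, _, qh, qk = q_shape
    kb, km, kh, kk = k_shape
    vb, vm, vh, _ = v_shape
    # Batch and head counts must match across q/k/v; q and k share head_dim;
    # k and v share sequence length. No implicit broadcasting is allowed.
    if not (qb == kb == vb and qh == kh == vh and qk == kk and km == vm):
        raise ValueError(
            "Incompatible shapes for attention inputs: "
            f"query.shape: {q_shape} key.shape: {k_shape} value.shape: {v_shape}"
        )

# The shapes from the traceback fail the batch check (4 vs 2), while a normal
# cross-attention call with matching batches but different seq lengths passes.
```

Note that query and key may legitimately have different sequence lengths (ordinary cross-attention); it is the batch mismatch introduced by tiling that triggers the error here.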

zongmi commented 9 months ago

I get the same error from the Hypertile call path. For now I don't intend to track down whether the problem lies in Hypertile or in Python 3.11; the alternative would be rolling my environment back to xformers-17, which I don't want to do. For the moment I simply turn Hypertile off to avoid the error. Maybe I should file a bug against webui instead?
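For context on why Hypertile is implicated: it splits the self-attention input into tiles, multiplying the batch dimension and dividing the token count before attention runs. A toy one-axis shape calculation (a simplified sketch; the real Hypertile tiles in 2D over height and width) reproduces the query shape seen in the ValueError:

```python
def hypertile_split(batch, tokens, channels, n_tiles):
    """Shape of a (B, N, C) hidden state after splitting into n_tiles tiles.

    Simplified 1-D illustration of Hypertile's rearrangement: the batch axis
    grows by the tile count while the token axis shrinks by the same factor.
    """
    if tokens % n_tiles:
        raise ValueError("token count must divide evenly into tiles")
    return (batch * n_tiles, tokens // n_tiles, channels)

# With 2 tiles, a (2, 1024, 640) hidden state becomes (4, 512, 640) -- exactly
# the tiled query in the error above (640 = 10 heads x 64 head_dim), while the
# un-tiled reference key/value keep (2, 1024, 640).
```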

negu63 commented 9 months ago

Environment
- OS: Windows 10
- Browser: Firefox
- GPU: NVIDIA RTX 4070 Ti
- CPU: Intel
- CUDA: 11.8
- cuDNN: 8.9.7

Webui
- version: v1.7.0
- python: 3.10.13
- torch: 2.0.1+cu118
- xformers: 0.0.20
- gradio: 3.41.2
- checkpoint: 1449e5b0b9
- arguments: --xformers --xformers-flash-attention --no-half-vae

ControlNet
- version: v1.1.440
- checkpoint: 9a5f2883
- preprocessor: canny
- model: controlnetxlCNXL_bdsqlszCanny [a74daa41]

I also get the error with preprocessors other than canny.

Model: AnimagineXL_V3 (based on SDXL 1.0)

VAE: sdxl-vae


I'm experiencing something similar, albeit with a different error.

I'm using Tiled VAE with Hypertile and DeepCache.

I don't use LoRA or embeddings.

In my case, turning off xformers made no difference.

When using the SDXL model, if I enable Hypertile or DeepCache and use ControlNet, I get an error.

The error is as follows:

RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 4096 but got size 2048 for tensor number 1 in the list.

📃Error Message (full) ---
2024-02-13 19:38:22,188 - ControlNet - INFO - unit_separate = False, style_align = False
2024-02-13 19:38:22,196 - ControlNet - INFO - Loading model from cache: controlnetxlCNXL_bdsqlszOpenpose [23893c0e]
2024-02-13 19:38:22,196 - ControlNet - INFO - Loading model: controlnetxlCNXL_bdsqlszOpenpose [23893c0e]
2024-02-13 19:38:22,393 - ControlNet - INFO - Loaded state_dict from [C:\stable-diffusion-webui\models\ControlNet\controlnetxlCNXL_bdsqlszOpenpose.safetensors]
2024-02-13 19:38:23,032 - ControlNet - INFO - ControlNet model controlnetxlCNXL_bdsqlszOpenpose [23893c0e](ControlModelType.Controlllite) loaded.
2024-02-13 19:38:23,053 - ControlNet - INFO - Using preprocessor: dw_openpose_full
2024-02-13 19:38:23,053 - ControlNet - INFO - preprocessor resolution = 512
2024-02-13 19:38:23,092 - ControlNet - INFO - ControlNet Hooked - Time = 0.9101603031158447
0%| | 0/20 [00:00, choke blush +++ (white collared_shirt) (saxe_blue loose_necktie) (plaid_pleated_mini_skirt skyblue skirt), extremely quality extremely detailed, illustration, contrapposto, cute anime face cinematic lighting cinematic angle', 'lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name worst quality, low quality', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], , 0, False, '', 0.8, 442993243, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False,
'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, 
False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, True, 1536, 96, True, True, True, True, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', UiControlNetUnit(enabled=True, module='dw_openpose_full', model='controlnetxlCNXL_bdsqlszOpenpose [23893c0e]', weight=1, image={'image': array([[[ 88, 91, 125], *** [ 88, 92, 127], *** [ 89, 91, 125], *** ..., *** [131, 129, 158], *** [131, 131, 158], *** [131, 130, 159]], *** *** [[ 88, 91, 125], *** [ 89, 91, 127], *** [ 89, 90, 125], *** ..., *** [130, 129, 158], *** [131, 129, 158], *** [131, 129, 159]], *** *** [[ 88, 90, 124], *** [ 88, 90, 124], *** [ 89, 90, 124], *** ..., *** [130, 128, 157], *** [130, 129, 157], *** [130, 128, 158]], *** *** ..., *** *** [[150, 128, 146], *** [168, 145, 167], *** [185, 158, 181], *** ..., *** [ 62, 54, 65], *** [ 62, 54, 65], *** [ 62, 54, 65]], *** *** [[139, 116, 132], *** [155, 132, 154], *** [183, 161, 182], *** ..., *** [ 62, 54, 65], *** [ 62, 54, 67], *** [ 62, 54, 66]], *** *** [[145, 126, 134], *** [145, 124, 141], *** [170, 147, 166], *** ..., *** [ 63, 53, 64], *** [ 63, 54, 66], *** [ 63, 55, 65]]], dtype=uint8), 'mask': array([[[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** ..., *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], 
*** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, False, False, False, '#000000', False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "C:\stable-diffusion-webui\modules\processing.py", line 734, in process_images
    res = process_images_inner(p)
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 41, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\stable-diffusion-webui\modules\processing.py", line 868, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 438, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\processing.py", line 1142, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 188, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\sd_models_xl.py", line 37, in apply_model
    return self.model(x, t, cond)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
    return self.diffusion_model(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions\sd-webui-deepcache-standalone\deepcache.py", line 133, in hijacked_unet_forward
    h = forward_timestep_embed(module, h, emb, context)
  File "C:\stable-diffusion-webui\extensions\sd-webui-deepcache-standalone\scripts\forward_timestep_embed_patch.py", line 39, in forward_timestep_embed
    x = layer(x, context)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 627, in forward
    x = block(x, context=context[i])
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 459, in forward
    return checkpoint(
  File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 165, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\diffusionmodules\util.py", line 182, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "C:\stable-diffusion-webui\repositories\generative-models\sgm\modules\attention.py", line 467, in _forward
    self.attn1(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions-builtin\hypertile\hypertile.py", line 307, in wrapper
    out = params.forward(x, *args[1:], **kwargs)
  File "C:\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 482, in xformers_attention_forward
    q_in = self.to_q(x)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet_lllite.py", line 215, in forward
    hack = hack + module(x, current_h_shape) * weight
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet_lllite.py", line 99, in forward
    cx = torch.cat([cx, self.down(x)], dim=1 if self.is_conv2d else 2)
RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 4096 but got size 2048 for tensor number 1 in the list. ---