Mikubill / sd-webui-controlnet

WebUI extension for ControlNet

[Bug]: ControlNet [Reference] mode doesn't work #2087

Closed · oribakiba closed this 1 year ago

oribakiba commented 1 year ago

Is there an existing issue for this?

What happened?

I reinstalled Stable Diffusion WebUI with only ControlNet installed, in order to test, but Reference mode no longer works.

Steps to reproduce the problem

  1. Launch Stable Diffusion WebUI
  2. Install the ControlNet extension
  3. Enable Reference mode in a ControlNet unit and click Generate (a scripted equivalent is sketched below)
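
For anyone who wants to reproduce this without clicking through the UI, here is a minimal sketch against the WebUI HTTP API. It assumes a default local install launched with `--api`; the prompt, port, reference image path, and the exact unit field names are assumptions for illustration, not taken from this report.

```python
import base64
import requests

# Hypothetical reproduction via the WebUI API (requires launching with --api).
# The reference image path and prompt are placeholders.
with open("reference.png", "rb") as f:
    ref_image = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "a photo of a dog",
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "enabled": True,
                "module": "reference_only",  # reference mode needs no model file
                "model": "None",
                "image": ref_image,
            }]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()  # on affected setups this fails with the einsum error below
```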

What should have happened?

Generation should have completed without errors.

Commit where the problem happens

webui: v1.6.0, controlnet: v1.1.409

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

--medvram --autolaunch --no-half-vae

List of enabled extensions

2023-09-09

Console logs

Traceback (most recent call last):
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\img2img.py", line 208, in img2img
        processed = process_images(p)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 451, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\processing.py", line 1528, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 191, in forward
        x_out[-uncond.shape[0]:] = self.inner_model(x_in[-uncond.shape[0]:], sigma_in[-uncond.shape[0]:], cond=make_condition_dict(uncond, image_cond_in[-uncond.shape[0]:]))
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
        result = forward_call(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 853, in forward_webui
        raise e
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 850, in forward_webui
        return forward(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 739, in forward
        outer.original_forward(
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 887, in hacked_basic_transformer_inner_forward
        x = self.attn2(self.norm2(x), context=context) + x
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 266, in split_cross_attention_forward
        s1 = einsum('b i d, b j d -> b i j', q[:, i:end], k)
      File "D:\STABLE DIFFUSION BKP AI\SD CN\stable-diffusion-webui\venv\lib\site-packages\torch\functional.py", line 378, in einsum
        return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
    RuntimeError: einsum(): subscript b has size 8 for operand 1 which does not broadcast with previously seen size 16

Additional information

No response
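
For readers hitting the same traceback: the final error is a batch-axis mismatch inside `split_cross_attention_forward` (the attention optimizer used when xformers is off). Reference mode's hook rewrites the self-attention context, and here the key tensor's batch ends up half the query's. A minimal sketch that reproduces the same RuntimeError; the concrete shapes (16 = 2 latents × 8 heads for the query, 8 = 1 latent × 8 heads for the key) are an assumed interpretation of the logged sizes:

```python
import torch
from torch import einsum

heads, dim = 8, 40

# Assumed shapes: the query still carries 2 latents x 8 heads = 16 on the
# batch axis, while the hooked self-attention context was built for only
# 1 latent x 8 heads = 8.
q = torch.randn(2 * heads, 4096, dim)
k = torch.randn(1 * heads, 4096, dim)

# Same einsum as in sd_hijack_optimizations.split_cross_attention_forward.
# Raises: RuntimeError: einsum(): subscript b has size 8 for operand 1
# which does not broadcast with previously seen size 16
s = einsum('b i d, b j d -> b i j', q, k)
```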

lllyasviel commented 1 year ago

I just tested and it worked well for both 1.5 and XL. I cannot reproduce the problem, but I will take a look if other people report the same issue.

oribakiba commented 1 year ago

> I just tested and it worked well for both 1.5 and XL. I cannot reproduce the problem, but I will take a look if other people report the same issue.

Do you understand why this problem happens? Because I don't get it. Also, if I enable xformers and do the same thing (img2img -> ControlNet -> Reference mode), I get this problem instead:

https://github.com/Mikubill/sd-webui-controlnet/issues/2028

But I really don't understand why it is happening.

NaughtDZ commented 9 months ago

Same error here when using reference:

2023-12-29 23:29:58,386 - ControlNet - INFO - unit_separate = False, style_align = False
2023-12-29 23:29:58,387 - ControlNet - INFO - Loading preprocessor: reference_only
2023-12-29 23:29:58,387 - ControlNet - INFO - preprocessor resolution = 552
2023-12-29 23:29:58,440 - ControlNet - INFO - ControlNet Hooked - Time = 0.057669639587402344
 0%| | 0/25 [00:00<?, ?it/s]
2023-12-29 23:29:58,655 - ControlNet - INFO - ControlNet used torch.float16 VAE to encode torch.Size([1, 4, 97, 69]).
 0%| | 0/25 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(qphh1ajdtuwqqgm)', 'transparent background, 1girl, solo,goblin girl, breasts, open mouth, full body, black background, polearm, spear, long hair, teeth, bodypaint, weapon, looking at viewer, colored skin, fur trim, navel, tribal, staff, medium breasts, smile, very long hair, fangs, brown hair, holding, jewelry, tattoo, simple background', '', ['bad'], 25, 'Euler a', 1, 1, 6, 776, 552, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000200A8562A40>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 1536, 96, True, True, True, False, None, <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000200B12E1C00>, UiControlNetUnit(enabled=True, module='reference_only', model='None', weight=1, image={'image': array([[[0, 0, 1], [0, 0, 0], [1, 1, 0], ..., [1, 1, 0], [3, 0, 3], [5, 1, 1]],
       [[0, 0, 1], [1, 0, 0], [1, 0, 1], ..., [1, 1, 1], [1, 1, 1], [2, 0, 0]],
       [[0, 0, 0], [1, 0, 1], [0, 0, 1], ..., [0, 1, 0], [1, 1, 0], [1, 0, 0]],
       ...,
       [[1, 0, 0], [1, 1, 1], [0, 0, 1], ..., [1, 1, 2], [1, 2, 2], [1, 0, 0]],
       [[2, 0, 3], [1, 0, 0], [1, 1, 1], ..., [1, 1, 0], [0, 2, 1], [0, 0, 1]],
       [[1, 2, 3], [0, 0, 0], [0, 0, 0], ..., [2, 1, 1], [1, 0, 1], [3, 1, 2]]], dtype=uint8), 'mask': array([[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
       [[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
       [[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
       ...,
       [[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
       [[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
       [[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 0, 1, 0, 'Version 2', 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, '🔄', None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, 5, 'all', 'all', 'all', '', '', '', '1', 'none', False, '', '', 'comma', '', True, '', '20', 'all', 'all', 'all', 'all', 0, '', None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
  File "I:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "I:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "I:\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "I:\stable-diffusion-webui\modules\processing.py", line 734, in process_images
    res = process_images_inner(p)
  File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "I:\stable-diffusion-webui\modules\processing.py", line 868, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 423, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "I:\stable-diffusion-webui\modules\processing.py", line 1142, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "I:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "I:\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "I:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "I:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "I:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "I:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "I:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "I:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 840, in forward_webui
    raise e
  File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 837, in forward_webui
    return forward(*args, **kwargs)
  File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 722, in forward
    outer.original_forward(
  File "I:\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
    return original_forward(self, x, timesteps, context, *args, **kwargs)
  File "I:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "I:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "I:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 884, in hacked_basic_transformer_inner_forward
    self_attn1 = self.attn1(x_norm1, context=self_attention_context)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "I:\stable-diffusion-webui\extensions-builtin\hypertile\hypertile.py", line 307, in wrapper
    out = params.forward(x, *args[1:], **kwargs)
  File "I:\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 496, in xformers_attention_forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=get_xformers_flash_attention_op(q, k, v))
  File "I:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
    return _memory_efficient_attention(
  File "I:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "I:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 310, in _memory_efficient_attention_forward
    out, *_ = op.apply(inp, needs_gradient=False)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\flash.py", line 235, in apply
    ) = _convert_input_format(inp)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\flash.py", line 177, in _convert_input_format
    key=key.reshape([batch * seqlen_kv, num_heads, head_dim_q]),
RuntimeError: shape '[40158, 8, 40]' is invalid for input of size 4283520