Closed: ostap667inbox closed this issue 9 months ago.
### What happened?

All reference preprocessors (reference_only, reference_adain, etc.) no longer work in txt2img or img2img.
### What should have happened?

Generation with a reference preprocessor enabled should have completed normally.
### Commit where the problem happens

webui: 1.7.0
controlnet: 1.1.430
### What browsers do you use to access the UI?

Google Chrome
### Command Line Arguments

--enable-insecure-extension-access --allow-code --listen --theme=dark --xformers --opt-split-attention --medvram --medvram-sdxl --api --autolaunch --update-check --update-all-extensions --no-gradio-queue
### Console logs

2024-01-15 07:41:41,258 - ControlNet - INFO - unit_separate = False, style_align = False
2024-01-15 07:41:53,814 - ControlNet - INFO - Loading preprocessor: reference_adain
2024-01-15 07:41:53,814 - ControlNet - INFO - preprocessor resolution = 1024
2024-01-15 07:41:53,914 - ControlNet - INFO - ControlNet Hooked - Time = 12.659957885742188
0%| | 0/35 [00:00<?, ?it/s]2024-01-15 07:41:56,461 - ControlNet - INFO - ControlNet used torch.float32 VAE to encode torch.Size([1, 4, 128, 128]).
0%| | 0/35 [00:02<?, ?it/s]
*** Error completing request
*** Arguments: ('task(wgzdmbbckadnr06)', 'photo of nature, real life location, photo, detail, panorama, wide', 'cartoon, painting, low res, bad quality, text, fisheye', [], 35, 'DPM++ SDE Karras', 1, 1, 5, 1024, 1024, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001C1E9525B40>, 0, False, '', 0.8, None, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()},
{'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 0, 1688, 8, 'Lanczos', 'Lanczos', 0.22, 0.22, 'cinematic color grading', 'low details, blurry', 1, 'Noise sync (sharp)', 0, 0.05, 0, 'DPM++ 2M SDE', False, 0, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 2048, 128, True, True, True, False, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 7, 100, 
'Constant', 0, 'Constant', 0, 4, True, 'MEAN', 'AD', 1, <scripts.animatediff_ui.AnimateDiffProcess object at 0x000001C1EA6B5870>, UiControlNetUnit(enabled=True, module='reference_adain', model='None', weight=1, image={'image': array([[[ 66, 109, 158], *** [ 66, 110, 158], *** [ 66, 110, 158], *** ..., *** [ 93, 126, 170], *** [ 93, 126, 170], *** [ 93, 126, 170]], *** *** [[ 67, 110, 158], *** [ 67, 110, 158], *** [ 67, 110, 158], *** ..., *** [ 94, 127, 170], *** [ 94, 127, 170], *** [ 94, 127, 170]], *** *** [[ 67, 110, 158], *** [ 67, 110, 158], *** [ 67, 110, 158], *** ..., *** [ 93, 126, 170], *** [ 93, 126, 170], *** [ 93, 126, 170]], *** *** ..., *** *** [[136, 120, 48], *** [136, 120, 49], *** [137, 122, 50], *** ..., *** [152, 133, 71], *** [149, 129, 69], *** [156, 135, 76]], *** *** [[137, 120, 49], *** [138, 122, 51], *** [139, 123, 52], *** ..., *** [152, 132, 71], *** [149, 128, 69], *** [159, 138, 79]], *** *** [[152, 135, 63], *** [149, 133, 61], *** [148, 132, 60], *** ..., *** [154, 134, 73], *** [152, 132, 72], *** [162, 142, 81]]], dtype=uint8), 'mask': array([[[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** ..., *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]], *** *** [[0, 0, 0], *** [0, 0, 0], *** [0, 0, 0], *** ..., *** [0, 0, 0], *** [0, 0, 0], *** [0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=0.5, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='My prompt is more important', inpaint_crop_input_image=False, hr_option='Both', 
save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), False, 0, 1, 0, 'Version 2', 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, False, True, 3, 4, 0.07, 0.14, 'bicubic', 0.5, 2, True, True, True, False, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 'Matrix', 'Columns', 'Mask', 'Prompt', '1,1', '0.2', False, False, False, 'Attention', [False], '0', '0', '0.4', None, '0', '0', False, False, False, 0, None, [], 0, False, [], [], False, 0, 1, False, False, 0, None, [], -2, False, [], False, 0, None, None, True, 0.3, 'Latent', 0.55, 0.3, 0.2, 0.2, [], False, 1.5, 1.2, False, 'from modules.processing import process_images\n\np.width = 768\np.height = 768\np.batch_size = 2\np.steps = 10\n\nreturn process_images(p)', 2, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, [], 30, '', 4, [], 1, '', '', '', '') {}
Traceback (most recent call last):
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
    processed = processing.process_images(p)
  File "C:\stable-diffusion-webui\modules\processing.py", line 755, in process_images
    res = process_images_inner(p)
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "C:\stable-diffusion-webui\modules\processing.py", line 889, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 435, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\processing.py", line 1163, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "C:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 553, in sample_dpmpp_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "C:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "C:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 845, in forward_webui
    raise e
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 842, in forward_webui
    return forward(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 727, in forward
    outer.original_forward(
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 845, in forward_webui
    raise e
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 842, in forward_webui
    return forward(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 727, in forward
    outer.original_forward(
  File "C:\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
    return original_forward(self, x, timesteps, context, *args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "C:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "C:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 889, in hacked_basic_transformer_inner_forward
    self_attn1 = self.attn1(x_norm1, context=self_attention_context)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\stable-diffusion-webui\extensions-builtin\hypertile\hypertile.py", line 307, in wrapper
    out = params.forward(x, *args[1:], **kwargs)
  File "C:\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 496, in xformers_attention_forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=get_xformers_flash_attention_op(q, k, v))
  File "C:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
    return _memory_efficient_attention(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "C:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 310, in _memory_efficient_attention_forward
    out, *_ = op.apply(inp, needs_gradient=False)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\flash.py", line 235, in apply
    ) = _convert_input_format(inp)
  File "C:\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\flash.py", line 177, in _convert_input_format
    key=key.reshape([batch * seqlen_kv, num_heads, head_dim_q]),
RuntimeError: shape '[65536, 8, 40]' is invalid for input of size 5242880
### Additional information

_No response_
Disabling HyperTile seems to fix the issue, according to https://github.com/Mikubill/sd-webui-controlnet/issues/2329.
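For anyone triaging: the element counts in the final `RuntimeError` already point at a tiling mismatch. The key tensor holds 5,242,880 elements (16384 tokens × 8 heads × 40 dims), but the reshape expects 65536 = 4 × 16384 rows. A minimal numpy sketch of that arithmetic follows; attributing the 4× factor to HyperTile splitting the query while the reference preprocessor's injected keys stay untiled is my reading, not something the log states directly:

```python
import numpy as np

# The traceback ends in xformers' flash-attention input conversion:
#   key.reshape([batch * seqlen_kv, num_heads, head_dim_q])
#   RuntimeError: shape '[65536, 8, 40]' is invalid for input of size 5242880
#
# 5242880 = 16384 * 8 * 40, i.e. the key really carries 16384 tokens,
# while batch * seqlen_kv was computed as 65536 = 4 * 16384 (4x off).
key = np.zeros((1, 16384, 8, 40), dtype=np.float16)  # stand-in key tensor
assert key.size == 5_242_880

try:
    key.reshape([65536, 8, 40])  # would need 65536 * 8 * 40 = 20,971,520 elements
except ValueError as exc:  # numpy's analogue of torch's RuntimeError
    print(exc)
```

This is consistent with the workaround above: disabling HyperTile removes the extra tiling factor, so the query and the injected reference keys agree on shape again.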