Closed oribakiba closed 1 year ago
I just tested and it worked well for 1.5 and XL. I cannot reproduce the problem, but I will take a look if other people report the same problem.
Do you understand why this problem happens? Because I don't get it. And if I try to use xformers and do the same thing (img2img -> ControlNet -> Reference mode), I get this error:
--->>>> https://github.com/Mikubill/sd-webui-controlnet/issues/2028
But I really don't understand why it is happening.
Same error here when using reference:
2023-12-29 23:29:58,386 - ControlNet - INFO - unit_separate = False, style_align = False
2023-12-29 23:29:58,387 - ControlNet - INFO - Loading preprocessor: reference_only
2023-12-29 23:29:58,387 - ControlNet - INFO - preprocessor resolution = 552
2023-12-29 23:29:58,440 - ControlNet - INFO - ControlNet Hooked - Time = 0.057669639587402344
0%| | 0/25 [00:00<?, ?it/s]2023-12-29 23:29:58,655 - ControlNet - INFO - ControlNet used torch.float16 VAE to encode torch.Size([1, 4, 97, 69]).
0%| | 0/25 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(qphh1ajdtuwqqgm)', 'transparent background, 1girl, solo,goblin girl, breasts, open mouth, full body, black background, polearm, spear, long hair, teeth, bodypaint, weapon, looking at viewer, colored skin, fur trim, navel, tribal, staff, medium breasts, smile, very long hair, fangs, brown hair, holding, jewelry, tattoo, simple background', '', ['bad'], 25, 'Euler a', 1, 1, 6, 776, 552, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000200A8562A40>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': 
False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, 'MultiDiffusion', False, True, 1024, 1024, 96, 96, 48, 4, 'None', 2, False, 10, 1, 1, 64, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 1536, 96, True, True, True, False, None, <scripts.animatediff_ui.AnimateDiffProcess object at 0x00000200B12E1C00>, UiControlNetUnit(enabled=True, module='reference_only', model='None', 
weight=1, image={'image': array([[[0, 0, 1], [0, 0, 0], [1, 1, 0], ..., [1, 1, 0], [3, 0, 3], [5, 1, 1]],
[[0, 0, 1], [1, 0, 0], [1, 0, 1], ..., [1, 1, 1], [1, 1, 1], [2, 0, 0]],
[[0, 0, 0], [1, 0, 1], [0, 0, 1], ..., [0, 1, 0], [1, 1, 0], [1, 0, 0]],
...,
[[1, 0, 0], [1, 1, 1], [0, 0, 1], ..., [1, 1, 2], [1, 2, 2], [1, 0, 0]],
[[2, 0, 3], [1, 0, 0], [1, 1, 1], ..., [1, 1, 0], [0, 2, 1], [0, 0, 1]],
[[1, 2, 3], [0, 0, 0], [0, 0, 0], ..., [2, 1, 1], [1, 0, 1], [3, 1, 2]]], dtype=uint8), 'mask': array([[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
...,
[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0], [0, 0, 0], [0, 0, 0], ..., [0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
...,
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]]], dtype=uint8)}, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 0, 1, 0, 'Version 2', 1.2, 0.9, 0, 0.5, 0, 1, 1.4, 0.2, 0, 0.5, 0, 1, 1, 1, 0, 0.5, 0, 1, '🔄', None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, 
'', [], 0, '', [], True, False, False, False, 0, False, 5, 'all', 'all', 'all', '', '', '', '1', 'none', False, '', '', 'comma', '', True, '', '20', 'all', 'all', 'all', 'all', 0, '', None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "I:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "I:\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "I:\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
processed = processing.process_images(p)
File "I:\stable-diffusion-webui\modules\processing.py", line 734, in process_images
res = process_images_inner(p)
File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "I:\stable-diffusion-webui\modules\processing.py", line 868, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "I:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 423, in process_sample
return process.sample_before_CN_hack(*args, **kwargs)
File "I:\stable-diffusion-webui\modules\processing.py", line 1142, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "I:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "I:\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
return func()
File "I:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in
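The traceback is cut off before the exception message, but the last ControlNet log line ("used torch.float16 VAE to encode") together with the failure at step 0 suggests a precision/range mismatch between the uint8 reference image and the half-precision encoder. As a hedged illustration only (not the extension's actual code, and the function name `prepare_reference_image` is hypothetical), feeding a uint8 image to a float16 VAE typically requires normalizing it to [-1, 1] and casting it to the encoder's dtype first:

```python
import numpy as np

def prepare_reference_image(img_u8: np.ndarray) -> np.ndarray:
    """Normalize an HxWx3 uint8 image to [-1, 1] and cast to float16,
    matching the precision a half-precision VAE runs in (illustrative only)."""
    if img_u8.dtype != np.uint8:
        raise TypeError("expected a uint8 image")
    img = img_u8.astype(np.float32) / 255.0   # scale to [0, 1]
    img = img * 2.0 - 1.0                     # shift to [-1, 1], the usual SD VAE input range
    return img.astype(np.float16)             # cast down to the encoder's dtype

# tiny smoke test on a 2x2 image
demo = np.array([[[0, 128, 255], [255, 0, 0]],
                 [[0, 0, 0], [255, 255, 255]]], dtype=np.uint8)
out = prepare_reference_image(demo)
print(out.dtype, float(out.min()), float(out.max()))  # float16 -1.0 1.0
```

If the extension skips a step like this (or the cast happens on one tensor but not the other), the sampler can raise a dtype-mismatch error on the very first step, which matches the 0/25 progress seen in the log above.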
Is there an existing issue for this?
What happened?
I reinstalled SD and only put ControlNet on it, in order to test. But reference mode doesn't work anymore!
Steps to reproduce the problem
What should have happened?
Reference mode should have worked.
Commit where the problem happens
webui: v1.6.0
controlnet: v1.1.409
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
List of enabled extensions
Console logs
Additional information
No response