Mikubill / sd-webui-controlnet

WebUI extension for ControlNet
GNU General Public License v3.0

[Bug]: Unhooking ControlNet not behaving as expected #2082

ThereforeGames closed this issue 11 months ago

ThereforeGames commented 1 year ago

Is there an existing issue for this?

What happened?

Hi,

In the past, I was able to programmatically disable all CN units before executing an arbitrary modules.img2img.img2img() task from my own extension. Here's the relevant code for this:

elif script_title == "controlnet":
    # Update the controlnet script args with a list of 0 units
    cn_path = self.Unprompted.extension_path(self.Unprompted.Config.stable_diffusion.controlnet_name)
    if cn_path:
        cn_module = self.Unprompted.import_file(f"{self.Unprompted.Config.stable_diffusion.controlnet_name}.internal_controlnet.external_code", f"{cn_path}/internal_controlnet/external_code.py")
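        # Passing an empty unit list tells ControlNet there are no active units for this task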
        cn_module.update_cn_script_in_processing(self.Unprompted.main_p, [])
        self.log.debug(f"{success_string} ControlNet")
    else:
        self.log.error("Could not communicate with ControlNet.")
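For reference, here is a minimal sketch of the same idea that marks each unit disabled instead of wiping the list; it assumes external_code also exposes get_all_units_in_processing and that ControlNetUnit carries an enabled flag (both are assumptions on my part, not verified against the current API):

    # Sketch: keep the unit objects but flip them off before re-attaching them.
    cn_units = cn_module.get_all_units_in_processing(self.Unprompted.main_p)  # assumed helper
    for unit in cn_units:
        unit.enabled = False  # assumed ControlNetUnit field
    cn_module.update_cn_script_in_processing(self.Unprompted.main_p, cn_units)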

As of a couple of weeks ago, the ControlNet hook throws an error whenever the appended img2img task uses a width or height different from that of the main txt2img task fired from the WebUI. Console output is provided below.

What do I need to do to correctly disable or unhook ControlNet in the latest version?

Thank you.

Steps to reproduce the problem

  1. Install Unprompted and rename extension folder to _unprompted so that the two extensions can run in the correct order
  2. Enable one CN unit and load a Single Image (I'm using a 512x758 PNG with OpenPose and Pixel Perfect)
  3. Change the image dimensions to 512x768 (our prompt will later modify this to 512x512 for processing a subsequent image)
  4. Run txt2img with a prompt such as: photo of a person[after][zoom_enhance][/after]
  5. Observe console output

What should have happened?

When no CN units are enabled, I expect to be able to queue up additional img2img or txt2img tasks with arbitrary settings.

Commit where the problem happens

webui: 1.6.0 controlnet: 1.1.408

What browsers do you use to access the UI?

Brave

Command Line Arguments

--no-half-vae --no-half --xformers --medvram --disable-nan-check

List of enabled extensions

Unprompted, ControlNet

Console logs

2023-09-07 21:18:54,890  (ERROR)    [Unprompted.img2img] Exception while running the img2img task
Traceback (most recent call last):
  File "T:\code\python\automatic-stable-diffusion-webui\extensions\_unprompted/shortcodes\stable_diffusion\img2img.py", line 56, in run_atomic
    img2img_result = modules.img2img.img2img(
  File "T:\code\python\automatic-stable-diffusion-webui\modules\img2img.py", line 208, in img2img
    processed = process_images(p)
  File "T:\code\python\automatic-stable-diffusion-webui\modules\processing.py", line 732, in process_images
    res = process_images_inner(p)
  File "T:\code\python\automatic-stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "T:\code\python\automatic-stable-diffusion-webui\modules\processing.py", line 1528, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "T:\code\python\automatic-stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "T:\code\python\automatic-stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
    return func()
  File "T:\code\python\automatic-stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 188, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "T:\code\python\automatic-stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "T:\code\python\automatic-stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 169, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
  File "T:\code\python\automatic-stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "T:\code\python\automatic-stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "T:\code\python\automatic-stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "T:\code\python\automatic-stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 853, in forward_webui
    raise e
  File "T:\code\python\automatic-stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 850, in forward_webui
    return forward(*args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 591, in forward
    control = param.control_model(x=x_in, hint=hint, timesteps=timesteps, context=context, y=y)
  File "T:\code\python\automatic-stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 31, in forward
    return self.control_model(*args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "T:\code\python\automatic-stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 311, in forward
    h += guided_hint
RuntimeError: The size of tensor a (64) must match the size of tensor b (96) at non-singleton dimension 2
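For what it's worth, the mismatched sizes line up with latent dimensions (pixel size divided by 8), which would fit the width/height change described in the steps above; a rough check, assuming the usual 8x latent downscale:

    # Rough check, assuming the standard 8x latent downscale factor:
    print(768 // 8)  # 96 -> ControlNet hint prepared for the 512x768 txt2img pass
    print(512 // 8)  # 64 -> latent size of the appended 512x512 img2img pass
    # 96 != 64, so `h += guided_hint` fails in cldm.py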

Additional information

No response

piaolingxue commented 6 months ago

This problem still occurs in v1.1.443.