lllyasviel / stable-diffusion-webui-forge


Error Using a v1.5 Model with ControlNet Tile (Control Mode Set) and Multidiffusion Tile Batch Size at the Same Time #2238


kmacmcfarlane commented 3 weeks ago

Repro steps using the integrated ControlNet and Multidiffusion extensions:

  1. Load a v1.5 model in SD mode and open img2img.
  2. Enable ControlNet unit 0 (Tile, control_v11f1e_sd15_tile [a371b31b], Control Mode set to "My prompt is more important" or "ControlNet is more important").
  3. Enable Multidiffusion with Tile Batch Size > 1.

Result

RuntimeError: The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 0

2024-10-31 14:13:16,316 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-10-31 14:13:16,426 - ControlNet - INFO - Using preprocessor: tile_resample
2024-10-31 14:13:16,426 - ControlNet - INFO - preprocessor resolution = 1536
2024-10-31 14:13:16,447 - ControlNet - INFO - Current ControlNet ControlNetPatcher: /home/rt/ai/models/stable-diffusion/ControlNet/v1.5/control_v11f1e_sd15_tile.pth
[Unload] Trying to free 14163.61 MB for cuda:0 with 1 models keep loaded ... Current free memory is 17865.67 MB ... Done.
2024-10-31 14:13:16,941 - ControlNet - INFO - ControlNet Method tile_resample patched.
[Unload] Trying to free 6009.88 MB for cuda:0 with 0 models keep loaded ... Current free memory is 17960.11 MB ... Done.
[Memory Management] Target: KModel, Free GPU: 17960.11 MB, Model Require: 0.00 MB, Previously Loaded: 1639.41 MB, Inference Require: 1021.00 MB, Remaining: 16939.11 MB, All loaded to GPU.
[Memory Management] Target: ControlNet, Free GPU: 17960.11 MB, Model Require: 0.00 MB, Previously Loaded: 689.09 MB, Inference Require: 1021.00 MB, Remaining: 16939.11 MB, All loaded to GPU.
Moving model(s) has taken 0.01 seconds
  0%|                                        | 0/31 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/modules/img2img.py", line 250, in img2img_function
    processed = process_images(p)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/modules/processing.py", line 842, in process_images
    res = process_images_inner(p)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/modules/processing.py", line 990, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/modules/processing.py", line 1865, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/modules/sd_samplers_kdiffusion.py", line 188, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/modules/sd_samplers_common.py", line 272, in launch_sampling
    return func()
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/modules/sd_samplers_kdiffusion.py", line 188, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/venv/lib64/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/k_diffusion/sampling.py", line 595, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/venv/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/venv/lib64/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/modules/sd_samplers_cfg_denoiser.py", line 199, in forward
    denoised, cond_pred, uncond_pred = sampling_function(self, denoiser_params=denoiser_params, cond_scale=cond_scale, cond_composition=cond_composition)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/backend/sampling/sampling_function.py", line 362, in sampling_function
    denoised, cond_pred, uncond_pred = sampling_function_inner(model, x, timestep, uncond, cond, cond_scale, model_options, seed, return_full=True)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/backend/sampling/sampling_function.py", line 303, in sampling_function_inner
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/backend/sampling/sampling_function.py", line 271, in calc_cond_uncond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/venv/lib64/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/extensions-builtin/sd_forge_multidiffusion/lib_multidiffusion/tiled_diffusion.py", line 457, in __call__
    c_tile['control'] = c_in['control_model'].get_control(x_tile, ts_tile, c_tile, len(cond_or_uncond))
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/backend/patcher/controlnet.py", line 339, in get_control
    return self.control_merge(None, control, control_prev, output_dtype)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/backend/patcher/controlnet.py", line 262, in control_merge
    out = compute_controlnet_weighting(out, self)
  File "/home/rt/ai/repos/stable-diffusion-webui-forge/backend/patcher/controlnet.py", line 148, in compute_controlnet_weighting
    control[k][i] = control_signal * final_weight[:, None, None, None]
RuntimeError: The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 0
The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 0
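
For context, here is a minimal PyTorch sketch of the broadcast failure at the line reported in backend/patcher/controlnet.py (`control[k][i] = control_signal * final_weight[:, None, None, None]`). The shapes and values below are illustrative assumptions, not the actual Forge tensors: with Tile Batch Size at 2, the control signal appears to carry tile_batch_size * (cond + uncond) = 4 samples along dimension 0, while the per-sample weight still has length 2, so the elementwise multiply cannot broadcast.

```python
import torch

# Illustrative shapes only (assumptions): Multidiffusion batches 2 tiles for
# both cond and uncond, so the ControlNet output has a batch of 4 ...
control_signal = torch.randn(4, 320, 64, 64)

# ... while the per-sample soft weight is still sized for cond/uncond only.
final_weight = torch.tensor([1.0, 0.825])

# Same pattern as the failing line in compute_controlnet_weighting:
out = control_signal * final_weight[:, None, None, None]
# RuntimeError: The size of tensor a (4) must match the size of tensor b (2)
# at non-singleton dimension 0
```

If that reading is right, the soft weight built for the "My prompt is more important" / "ControlNet is more important" control modes is sized for the plain cond/uncond batch and is not expanded to the larger tile batch that Multidiffusion passes in, which would explain why the error only appears when both features are enabled together.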