cubiq / ComfyUI_IPAdapter_plus


Error when using IPAdapter with Kohya Deep Shrink: The size of tensor a (405) must match the size of tensor b (1530) at non-singleton dimension 1 #97

Closed · xxlukexx closed this issue 7 months ago

xxlukexx commented 8 months ago

This is only an issue when using the recently added Deep Shrink node; everything works fine if that node is bypassed or removed.


Error occurred when executing KSampler:

The size of tensor a (405) must match the size of tensor b (1530) at non-singleton dimension 1

File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1237, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1207, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
raise e
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 711, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 617, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 556, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 745, in sample_lcm
denoised = model(x, sigmas[i] * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 277, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 267, in forward
return self.apply_model(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 264, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 252, in sampling_function
cond, uncond = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 230, in calc_cond_uncond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 68, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 628, in forward
h = forward_timestep_embed(module, h, emb, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 56, in forward_timestep_embed
x = layer(x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 560, in forward
x = block(x, context=context[i], transformer_options=transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\venv\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 390, in forward
return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 123, in checkpoint
return func(*inputs)
^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 489, in _forward
n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 290, in __call__
out_ip = out_ip * mask_downsample
~~~~~~~^~~~~~~~~~~~~~~~~
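
For context, what appears to be happening at that last line, shown as a minimal sketch with hypothetical shapes (not the node's actual code): the IPAdapter attention output is sized for the Deep Shrink-scaled latent, while the attention mask was downsampled for the full-resolution latent, so the element-wise multiplication can't broadcast.

```python
import torch

# Hypothetical shapes chosen to mirror the error; the real tensors come from
# IPAdapterPlus.py and the Deep Shrink patch.
out_ip = torch.randn(2, 405, 640)          # attention output for the shrunk latent
mask_downsample = torch.randn(2, 1530, 1)  # mask sized for the full-resolution latent

try:
    out_ip * mask_downsample
except RuntimeError as e:
    print(e)  # The size of tensor a (405) must match the size of tensor b (1530) ...
```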
cubiq commented 8 months ago

have you updated to the latest version? I think I fixed this already

xxlukexx commented 8 months ago

I think I'm up to date, but might be doing something wrong with git as I'm still getting the error.

git log -1 gives me

commit 2255ae76602d5b122fa04d19bda650fa40899c02 (HEAD -> main, origin/main, origin/HEAD)
Author: matt3o <matt3o@gmail.com>
Date:   Mon Nov 20 20:23:57 2023 +0100

    fix compatibility with deep shrink
cubiq commented 8 months ago

I need to see the workflow

xxlukexx commented 8 months ago

Sure. This workflow is a WIP and a bit of a mess; I've deleted everything that isn't needed to reproduce this error.

Generating from the current state causes the error for me. Bypassing the deep shrink node works.

Let me know if there's anything else I can send you, or anything you'd like me to test.

Cheers

ipa_deepshrink_mask_error.json

cubiq commented 8 months ago

are you using the LCM sampler with the LCM LoRA?

xxlukexx commented 8 months ago

In that workflow I'm only patching in LCM for the upscale pass. The first pass (where the error occurs) isn't using it.

The error occurs both with LCM (so that's the LoRA and ModelSamplingDiscrete nodes) and without.

cubiq commented 8 months ago

I'm sorry, there are really too many variables in that workflow. I would really appreciate it if you could streamline it to the minimum number of nodes that actually cause the error, using default nodes as much as possible.

laksjdjf commented 8 months ago

The deep shrink downscale rate can be any value, not just 2.0. The current implementation cannot handle this.
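
To illustrate that point (a sketch under the assumption that the mask is only precomputed for the stock attention resolutions of the full latent, which may not be exactly what the node does): with Deep Shrink active, the patched blocks see a latent scaled by an arbitrary factor, so their token count can match none of the precomputed mask sizes.

```python
# Assumed for illustration only: a hypothetical 96x64 latent and the usual
# attention resolutions (the latent size divided by 1, 2 and 4).
latent_h, latent_w = 96, 64
precomputed_mask_tokens = {(latent_h // d) * (latent_w // d) for d in (1, 2, 4)}

# With Deep Shrink at an arbitrary factor, the inner blocks see a scaled latent.
factor = 1.5
shrunk_tokens = round(latent_h / factor) * round(latent_w / factor)

print(sorted(precomputed_mask_tokens))  # [384, 1536, 6144]
print(shrunk_tokens)                    # 2752 -- matches none of them
```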

cubiq commented 8 months ago

it works in all the tests I've done, but I'm sure there are cases that trigger this issue; I just need to understand which ones. I believe I know what needs to be fixed, but to be sure I need to be able to replicate it

cubiq commented 8 months ago

This is a minimal workflow that actually works no matter what values I enter: shrink.json

just let me know how to break it :smile:

(attached example output: ComfyUI_temp_mtuts_00034_)

Update: Okay, got it, there's a certain threshold to the downscale factor; 1.99 still works (maybe there's some rounding?)
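
Rounding would explain the threshold: for typical latent sizes a factor of 1.99 rounds to the same spatial size as 2.0, so the mask still lines up, while a slightly smaller factor already produces a different size. A quick check with a hypothetical 96x64 latent, assuming the scaled sides are simply rounded:

```python
for factor in (2.0, 1.99, 1.9):
    h, w = round(96 / factor), round(64 / factor)  # hypothetical 96x64 latent
    print(factor, (h, w))
# 2.0  -> (48, 32)
# 1.99 -> (48, 32)  rounds to the same size as 2.0, so the mask still fits
# 1.9  -> (51, 34)  a different size, so the mask no longer matches
```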

xxlukexx commented 8 months ago

> I'm sorry, there are really too many variables in that workflow. I would really appreciate it if you could streamline it to the minimum number of nodes that actually cause the error, using default nodes as much as possible.

I can only apologise for my spaghetti workflow! :)

I'll try and reduce it as much as possible and see if I get the error. I won't get a chance to look at it until this evening but I'll update you on what I find.

laksjdjf commented 8 months ago

Isn't it because of the mask?

cubiq commented 8 months ago

> I'll try and reduce it as much as possible and see if I get the error. I won't get a chance to look at it until this evening but I'll update you on what I find.

don't worry, I've found the problem. I was lowering the factor by too small a value. If you set it to something like 1.5 or 4 with a mask, it will break. It doesn't have the highest priority since it works with the default value, but I'll try to work on this issue, or maybe laksjdjf will
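
For reference, one way the mask handling could be made factor-agnostic (a sketch of the general idea, not necessarily how the fix was implemented in the repo): infer the current spatial size from the attention sequence length and the latent aspect ratio, then resize the mask to that size on every call.

```python
import math
import torch
import torch.nn.functional as F

def downsample_mask_to_tokens(mask, seq_len, latent_h, latent_w):
    """Resize an attention mask to match an arbitrary token count.

    mask: [batch, H, W] mask at the original latent resolution.
    seq_len: number of spatial tokens seen by the current attention layer
             (e.g. out_ip.shape[1]), whatever Deep Shrink factor is active.
    Sketch only: rounding edge cases where h * w != seq_len would still
    need padding or cropping.
    """
    ratio = latent_h / latent_w
    w = max(1, round(math.sqrt(seq_len / ratio)))
    h = max(1, round(seq_len / w))
    resized = F.interpolate(mask.unsqueeze(1).float(), size=(h, w), mode="bilinear")
    return resized.reshape(mask.shape[0], -1, 1)  # broadcasts against [batch, tokens, dim]
```

That way the mask tracks whatever resolution the patched blocks actually run at, instead of assuming the stock full, half and quarter attention levels.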