cubiq / ComfyUI_IPAdapter_plus


error at sampling if using cosxl + ipadapter #446

Closed MoonMoon82 closed 3 months ago

MoonMoon82 commented 5 months ago

Hi!

I already updated to https://github.com/cubiq/ComfyUI_IPAdapter_plus/commit/f217b03928599488fbb5d8da7aac9f4bca4d034e but it seems it still does not work.

I just loaded the workflow from https://comfyanonymous.github.io/ComfyUI_examples/edit_models/ and added the unified loader (PLUS) and an IPAdapter node, but no matter what kind of setup I use, it always produces the following error while sampling.

Could you please have a look?

Thank you very much in advance!

Kind regards!

Requested to load CLIPVisionModelProjection
Loading 1 new model
Requested to load SDXL_instructpix2pix
Loading 1 new model
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
!!! Exception during processing !!!
Traceback (most recent call last):
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 529, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 644, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 623, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 534, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 272, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 610, in __call__
    return self.predict_noise(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 444, in predict_noise
    out = comfy.samplers.calc_cond_batch(self.inner_model, [negative_cond, middle_cond, self.conds.get("positive", None)], x, timestep, model_options)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 218, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 97, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 850, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed
    x = layer(x, context, transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 633, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\attention_sharing.py", line 253, in forward
    return func(self, x, context, transformer_options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 460, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 191, in checkpoint
    return func(*inputs)
           ^^^^^^^^^^^^^
  File "e:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 557, in _forward
    n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\CrossAttentionPatch.py", line 115, in __call__
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\CrossAttentionPatch.py", line 115, in <listcomp>
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
                      ~~~~~~~~~~~~~~~~~~^^^
IndexError: tuple index out of range

Prompt executed in 6.93 seconds

[screenshot]

cubiq commented 5 months ago

Yeah, I noticed. The "edit" model doesn't work; the normal one does. I haven't looked at the code, but at the moment this has very low priority.

FG-GIS commented 4 months ago

Hi there, I'm not having this specific issue; everything works fine if I use a standard KSampler/Advanced. I'm running into issues when using IPAdapter with a custom sampler that implements dual CFG guidance.

If I connect the model pipeline only to the "BasicScheduler" node, the IPAdapter does not influence the generation; if I connect it to the "DualCFGGuider" node, I get this error:

Error occurred when executing SamplerCustomAdvanced:

tuple index out of range

  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 550, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 650, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 629, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 534, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 272, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 616, in __call__
    return self.predict_noise(*args, **kwargs)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 465, in predict_noise
    out = comfy.samplers.calc_cond_batch(self.inner_model, [negative_cond, middle_cond, self.conds.get("positive", None)], x, timestep, model_options)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 218, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 97, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 850, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 633, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffuse\lib_layerdiffusion\attention_sharing.py", line 253, in forward
    return func(self, x, context, transformer_options)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 460, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 191, in checkpoint
    return func(*inputs)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 557, in _forward
    n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\CrossAttentionPatch.py", line 138, in __call__
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\CrossAttentionPatch.py", line 138, in <listcomp>
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)

Thank you in advance. I'm not expecting a fix; I just wanted to leave a trace of this bug, in the hope that it might help speed up your work.

glyfspace commented 4 months ago

did anyone figure out a workaround for this?

MoonMoon82 commented 4 months ago

@glyfspace @cubiq I don't know what happened, but it seems to be working for me! [screenshots]

I noticed that this setup is very sensitive to the CFG scale and the IPAdapter weight. Regarding my second example: it may be better to mask the IPAdapter reference image to restrict it to what you want to change, since the background of the result seems to change to the background of the reference image.

glyfspace commented 4 months ago

I think the issue is not necessarily with cosxl; I got it working with the normal KSampler as well. I believe the issue is with the DualCFG support - #562

From my understanding, DualCFG is pretty much required to get very good results out of cosxl edit, so this bug is really an issue with DualCFG.
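
For context, the failure is easy to reproduce in isolation: a standard sampler batches two conds, so the attention patch indexes a two-element tuple, while DualCFGGuider batches three (the [negative_cond, middle_cond, positive] call visible in the tracebacks above), so an index of 2 can appear. A minimal standalone sketch, with made-up tensor shapes:

    import torch

    # Stand-ins for the IPAdapter key embeds; the shapes are made up.
    k_cond = torch.zeros(2, 16, 1280)
    k_uncond = torch.ones(2, 16, 1280)

    # Standard CFG batches two conds, so indices are only ever 0 or 1. Fine:
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in [0, 1]], dim=0)

    # Dual CFG batches three conds, so an index of 2 appears and the
    # two-element tuple lookup raises IndexError: tuple index out of range.
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in [0, 1, 2]], dim=0)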

FG-GIS commented 4 months ago

Yeah, the issue comes up only when using the DualCFGGuider node. The cfg_cond2_negative input is the value that allows you to "stop" or limit the influence on the source image; it is set to a flat 1 in a standard KSampler. And it is really useful when using CosXL_Edit merged models.

I get a tuple error; it may just be a shape error on the model side. I'll try to look into it to get some more info, but I'm not that experienced at debugging ML code.
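
For readers unfamiliar with dual CFG: the guider makes three predictions per step (negative, middle, positive), which is why three conds end up in the batch. The following sketch shows the general shape of such a combination; it is illustrative only, not copied from ComfyUI, and the exact formula there may differ:

    # Illustrative sketch of dual CFG guidance, not ComfyUI's actual code.
    # pos, middle, neg are the three model predictions for one step; middle
    # carries the source-image conditioning that cfg_cond2_negative limits.
    def dual_cfg(pos, middle, neg, cfg1, cfg2):
        # cfg2 steers the middle (source image) prediction away from the
        # negative, then cfg1 steers the positive away from the middle.
        return neg + cfg2 * (middle - neg) + cfg1 * (pos - middle)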

cubiq commented 4 months ago

this looks like an easy fix, just need to find the time :smile:

glyfspace commented 4 months ago

thank you!

glyfspace commented 3 months ago

Hi! Checking in: is there a fix for this, or a timeline for when it will be completed? Thanks!

cubiq commented 3 months ago

I've pushed preliminary support for cosxl edit. Please let me know how it works. It's a bit temperamental, and I feel like I need to do something to the middle conditioning before applying the IPAdapter weights, but anyway, there's a demo workflow in the examples directory.

I believe it works better with simple "style transfer" and PLUS model.

[screenshot]

glyfspace commented 3 months ago

I just tried it out with the updated commit, and it looks like it still has the same issue with DualCFG.

Do you mind sharing how you got IPAdapter + DualCFG to work together? Thanks!

[screenshots]

glyfspace commented 3 months ago

Trying your example workflow with IPAdapter Plus also still has the same bug.

[screenshots]

Here is where things are tracing back to, it looks like.

File "/Users/rishi/Documents/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/CrossAttentionPatch.py", line 131, in ipadapter_attention
ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
File "/Users/rishi/Documents/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/CrossAttentionPatch.py", line 131, in
ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)

Thanks! Totally appreciate all the help :)

Flavoco commented 3 months ago

Hi!

I encountered the following “tuple index out of range” error while using the SamplerCustomAdvanced node in ComfyUI on Windows:

Error occurred when executing SamplerCustomAdvanced:

tuple index out of range

  File "S:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  ...
  File "S:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\CrossAttentionPatch.py", line 131, in 
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
                      ~~~~~~~~~~~~~~~~~~^^^

The error occurs in the CrossAttentionPatch.py file, specifically in the ipadapter_attention function.

Please let me know if you need any additional details or if there’s anything else I can provide to assist in resolving this issue.

Thank you!

cubiq commented 3 months ago

Are you guys sure you are using cosxl edit and not the standard model?

Flavoco commented 3 months ago

Yes, mine is the edit one...

cubiq commented 3 months ago

And that's with the default included example, no modifications? Can I see a screenshot?

cubiq commented 3 months ago

I'm not sure. Try to upgrade Comfy; maybe you have an old environment.

glyfspace commented 3 months ago

I have the same setup - tried updating my version of Comfy, but got the same error. Here is the full output.

Error occurred when executing SamplerCustomAdvanced:

tuple index out of range

  File "/Users/USER/Documents/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Users/USER/Documents/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Users/USER/Documents/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Users/USER/Documents/ComfyUI/comfy_extras/nodes_custom_sampler.py", line 529, in sample
    }
  File "/Users/USER/Documents/ComfyUI/comfy/samplers.py", line 644, in sample
    def inner_set_conds(self, conds):
  File "/Users/USER/Documents/ComfyUI/comfy/samplers.py", line 623, in inner_sample
    positive = conds["positive"]
  File "/Users/USER/Documents/ComfyUI/comfy/samplers.py", line 534, in sample
  File "/Users/USER/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/USER/Documents/ComfyUI/comfy/k_diffusion/sampling.py", line 580, in sample_dpmpp_2m
    old_denoised = None
  File "/Users/USER/Documents/ComfyUI/comfy/samplers.py", line 272, in __call__
    uncond = None
  File "/Users/USER/Documents/ComfyUI/comfy/samplers.py", line 610, in __call__
    conds[k] = encode_model_conds(model.extra_conds, conds[k], noise, device, k, latent_image=latent_image, denoise_mask=denoise_mask, seed=seed)
  File "/Users/USER/Documents/ComfyUI/comfy_extras/nodes_custom_sampler.py", line 444, in predict_noise
    FUNCTION = "get_guider"
  File "/Users/USER/Documents/ComfyUI/comfy/samplers.py", line 218, in calc_cond_batch
    transformer_options["cond_or_uncond"] = cond_or_uncond[:]
  File "/Users/USER/Documents/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/utils.py", line 63, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)
  File "/Users/USER/Documents/ComfyUI/comfy/model_base.py", line 97, in apply_model
    extra = kwargs[o]
  File "/Users/USER/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/USER/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/USER/Documents/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 885, in forward
    else:
  File "/Users/USER/Documents/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 44, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "/Users/USER/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/USER/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/USER/Documents/ComfyUI/comfy/ldm/modules/attention.py", line 633, in forward
    context = [context] * len(self.transformer_blocks)
  File "/Users/USER/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/USER/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/USER/Documents/ComfyUI/custom_nodes/ComfyUI-layerdiffuse/lib_layerdiffusion/attention_sharing.py", line 253, in forward
    return func(self, x, context, transformer_options)
  File "/Users/USER/Documents/ComfyUI/comfy/ldm/modules/attention.py", line 460, in forward
    context_dim_attn2 = context_dim
  File "/Users/USER/Documents/ComfyUI/comfy/ldm/modules/diffusionmodules/util.py", line 191, in checkpoint
    return func(*inputs)
  File "/Users/USER/Documents/ComfyUI/comfy/ldm/modules/attention.py", line 557, in _forward
    attn2_replace_patch = transformer_patches_replace.get("attn2", {})
  File "/Users/USER/Documents/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/CrossAttentionPatch.py", line 26, in __call__
    out = out + callback(out, q, k, v, extra_options, **self.kwargs[i])
  File "/Users/USER/Documents/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/CrossAttentionPatch.py", line 131, in ipadapter_attention
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
  File "/Users/USER/Documents/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/CrossAttentionPatch.py", line 131, in <listcomp>
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
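
(Note that the quoted source lines in this trace do not match the reported function names, e.g. samplers.py line 644 showing def inner_set_conds(self, conds):. That usually means the traceback was rendered against source files from a different version than the code actually running, which fits the update questions that follow.)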

glyfspace commented 3 months ago

looks like this is the same issue as #501

basically it doesn't work with SamplerCustomAdvanced

thank you!! let me know if I can provide any more information!

cubiq commented 3 months ago

[screenshot]

It does work with SamplerCustomAdvanced, otherwise I wouldn't have pushed it. There's something else going on here, but I can't replicate it, so I don't know how to help.

glyfspace commented 3 months ago

No worries! Will try to figure it out. If I do I will post here.

Thanks!

cubiq commented 3 months ago

If you can, try adding the print statement to the CrossAttentionPatch.py file at line 126 and let me know the result from the terminal. That would help me understand what is going on. The code should look like this:

    print(cond_or_uncond, k_cond.shape, k_uncond.shape)
    if len(cond_or_uncond) == 3: # TODO: conxl, I need to check this
        ip_k = torch.cat([(k_cond, k_uncond, k_cond)[i] for i in cond_or_uncond], dim=0)
        ip_v = torch.cat([(v_cond, v_uncond, v_cond)[i] for i in cond_or_uncond], dim=0)
    else:
        ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
        ip_v = torch.cat([(v_cond, v_uncond)[i] for i in cond_or_uncond], dim=0)

thanks

glyfspace commented 3 months ago

thanks!

here is what was printed

[2] torch.Size([4, 16, 1280]) torch.Size([4, 16, 1280])
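
(Reading that against the snippet above: cond_or_uncond is [2], a single chunk, so len(cond_or_uncond) is 1. The len == 3 branch is skipped and the else branch indexes the two-element tuple with 2, which is exactly the IndexError in the tracebacks.)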

cubiq commented 3 months ago

are you sure you have the latest comfyui installed?

glyfspace commented 3 months ago

Yeah I think so. It even has the latest cosxl_edit fix from comfy

[screenshot]

cubiq commented 3 months ago

would you mind trying to upgrade from the batch script (not the manager)?

glyfspace commented 3 months ago

I'll try that out! I am on mac - not sure if that makes a difference.

cubiq commented 3 months ago

git pull then. But yeah that was an important piece of info.

glyfspace commented 3 months ago

"Already up to date." Looks like everything is updated. Sorry for leaving that out!

cubiq commented 3 months ago

I finally know what is happening. When there's not enough memory, the embeds are not concatenated but sent separately instead. I need to understand how to cope with that; considering I don't have a Mac and can't replicate the issue, I'm coding a little blind.

I would need someone that has this issue to assist me, like in a screencast meeting or a remote desktop session.
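
For what it's worth, here is a minimal sketch of one way the lookup could cope with both the batched and the split low-memory passes. It reuses the variable names from the snippets quoted in this thread, but it is only a sketch, not the fix that actually shipped:

    # Map every chunk index to an embed instead of assuming exactly two
    # chunks: index 1 is the uncond pass in the standard ordering, anything
    # else gets the cond embeds. This mirrors the (k_cond, k_uncond, k_cond)
    # mapping above while also covering single-chunk passes such as [2].
    ip_k = torch.cat([k_uncond if i == 1 else k_cond for i in cond_or_uncond], dim=0)
    ip_v = torch.cat([v_uncond if i == 1 else v_cond for i in cond_or_uncond], dim=0)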

glyfspace commented 3 months ago

Happy to help you out with that!

So it should work on Windows? I can test on a Windows machine tomorrow to confirm.

cubiq commented 3 months ago

It doesn't work if Comfy thinks there's not enough VRAM, so I guess any low-spec'd card. I can try to push the new code with that in mind, but we need to confirm that it's working.

You can try this code if you want (starting at line 126):

    if len(cond_or_uncond) == 3: # TODO: conxl, I need to check this
        ip_k = torch.cat([(k_cond, k_uncond, k_cond)[i] for i in cond_or_uncond], dim=0)
        ip_v = torch.cat([(v_cond, v_uncond, v_cond)[i] for i in cond_or_uncond], dim=0)
    elif len(cond_or_uncond) == 1: # TODO: conxl, I need to check this
        ip_k = k_cond[0]
        ip_v = v_cond[0]
    else:
        ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
        ip_v = torch.cat([(v_cond, v_uncond)[i] for i in cond_or_uncond], dim=0)

glyfspace commented 3 months ago

Still errors out.

[screenshot]

Here is how I integrated it.

[screenshot]

I can try on a higher-end card tomorrow to confirm that it works there! Thanks for all your help

cubiq commented 3 months ago

What is the full error backtrace?

glyfspace commented 3 months ago

  File "/Users/rishi/Documents/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Users/rishi/Documents/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Users/rishi/Documents/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Users/rishi/Documents/ComfyUI/comfy_extras/nodes_custom_sampler.py", line 552, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
  File "/Users/rishi/Documents/ComfyUI/comfy/samplers.py", line 683, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/Users/rishi/Documents/ComfyUI/comfy/samplers.py", line 662, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "/Users/rishi/Documents/ComfyUI/comfy/samplers.py", line 567, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "/Users/rishi/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/rishi/Documents/ComfyUI/comfy/k_diffusion/sampling.py", line 583, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/Users/rishi/Documents/ComfyUI/comfy/samplers.py", line 291, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "/Users/rishi/Documents/ComfyUI/comfy/samplers.py", line 649, in __call__
    return self.predict_noise(*args, **kwargs)
  File "/Users/rishi/Documents/ComfyUI/comfy_extras/nodes_custom_sampler.py", line 466, in predict_noise
    out = comfy.samplers.calc_cond_batch(self.inner_model, [negative_cond, middle_cond, self.conds.get("positive", None)], x, timestep, model_options)
  File "/Users/rishi/Documents/ComfyUI/comfy/samplers.py", line 226, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "/Users/rishi/Documents/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet/adv_control/utils.py", line 63, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)
  File "/Users/rishi/Documents/ComfyUI/comfy/model_base.py", line 103, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "/Users/rishi/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/rishi/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/rishi/Documents/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 887, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "/Users/rishi/Documents/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 44, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "/Users/rishi/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/rishi/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/rishi/Documents/ComfyUI/comfy/ldm/modules/attention.py", line 644, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "/Users/rishi/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/rishi/Documents/ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/rishi/Documents/ComfyUI/custom_nodes/ComfyUI-layerdiffuse/lib_layerdiffusion/attention_sharing.py", line 253, in forward
    return func(self, x, context, transformer_options)
  File "/Users/rishi/Documents/ComfyUI/comfy/ldm/modules/attention.py", line 568, in forward
    n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options)
  File "/Users/rishi/Documents/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/CrossAttentionPatch.py", line 26, in __call__
    out = out + callback(out, q, k, v, extra_options, **self.kwargs[i])
  File "/Users/rishi/Documents/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/CrossAttentionPatch.py", line 132, in ipadapter_attention
    ip_k = k_cond
  File "/Users/rishi/Documents/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/CrossAttentionPatch.py", line 132, in <listcomp>
    ip_k = k_cond
IndexError: tuple index out of range

cubiq commented 3 months ago

I know what is happening. If I find someone on my Discord with this problem, maybe I can have them do some real-time tests.

glyfspace commented 3 months ago

Great thanks! I can also help with screen share another night if needed!

I'll test tomorrow on a higher quality GPU also to confirm it works as expected there. Thanks again so much!

cubiq commented 3 months ago

if you join my discord we can try https://discord.com/invite/W2DhHkcjgn

Flavoco commented 3 months ago

I can confirm: it works on a better GPU. Thank you!

alisson-anjos commented 3 months ago

It doesn't work if Comfy thinks there's not enough VRAM, so I guess any low-spec'd card. I can try to push the new code with that in mind, but we need to confirm that it's working.

You can try this code if you want (starting at line 126):

    if len(cond_or_uncond) == 3: # TODO: conxl, I need to check this
        ip_k = torch.cat([(k_cond, k_uncond, k_cond)[i] for i in cond_or_uncond], dim=0)
        ip_v = torch.cat([(v_cond, v_uncond, v_cond)[i] for i in cond_or_uncond], dim=0)
    elif len(cond_or_uncond) == 1: # TODO: conxl, I need to check this
        ip_k = k_cond
        ip_v = v_cond
    else:
        ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
        ip_v = torch.cat([(v_cond, v_uncond)[i] for i in cond_or_uncond], dim=0)

The error was resolved, but it does not generate the same result as without the IPAdapter; it seems it no longer maintains the details of the original image.
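
(A hedged guess at why: the elif len(cond_or_uncond) == 1 branch above always returns the cond embeds, even when the single chunk is the uncond pass (index 1 in the standard ordering), so the unconditional prediction also receives the IPAdapter image and the guidance changes. An index-aware mapping, like the sketch after cubiq's earlier comment, would avoid that.)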

cubiq commented 3 months ago

if anyone with this issue is willing to do a 1:1 session on discord I'm available

glyfspace commented 3 months ago

I am getting the same issue where the results are washed out and the original asset is not preserved. However, that is on my cloud GPU (which I still need to update to the latest Comfy), so I wasn't sure whether it was caused by the IPAdapter or not. I can help with a 1:1 session on Discord now, but on my MacBook, where things run slowly.

pendave commented 3 months ago

Hello guys, I hit the same issue: "0%| ........................ | 0/20 [00:00<?, ?it/s] !!! Exception during processing !!! tuple index out of range". Does the DualCFGGuider node output an inconsistent tuple?

I tried changing the "IPAdapter Advanced" node to the "IPAdapter" node, and it can go through sometimes. [screenshot]

I also tried changing the "BasicScheduler" node to the "AlignYourStepsScheduler" node, and it can go through sometimes. [screenshot]

"Sometimes" means that it doesn't always succeed :(

=============================

I asked ChatGPT:

The error message you are encountering, "IndexError: tuple index out of range," suggests that there is an issue with accessing an element of a tuple or list that doesn't exist within the expected range. Specifically, this issue seems to arise from the ipadapter_attention function in the CrossAttentionPatch.py file.

Here is a more detailed analysis of the error:

  1. Error Location: The error occurs in the following line of CrossAttentionPatch.py:

    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
  2. Error Cause: The list comprehension attempts to access elements of a tuple (k_cond, k_uncond) based on indices provided by cond_or_uncond. However, one of the indices in cond_or_uncond is out of the valid range (0 or 1 for a tuple of two elements). This causes an "IndexError" because the code is trying to access an index that doesn't exist in the tuple.

  3. Understanding the Context:

    • k_cond and k_uncond are likely tensors or other data structures being used in the attention mechanism.
    • cond_or_uncond is expected to be a list of indices that should be either 0 or 1, but it appears to contain an invalid index.
  4. Potential Causes and Solutions:

    • Incorrect Indices in cond_or_uncond: Ensure that cond_or_uncond only contains valid indices (0 or 1).
    • Initialization Issues: Check how cond_or_uncond is being initialized and populated. Ensure it is being set correctly.
    • Data Flow Issues: Trace the data flow to ensure cond_or_uncond is not being altered in an unexpected manner before it reaches the problematic line of code.

Steps to Debug and Fix:

  1. Print Statements: Add print statements before the line causing the error to check the contents of k_cond, k_uncond, and cond_or_uncond:

    print(f"k_cond: {k_cond}")
    print(f"k_uncond: {k_uncond}")
    print(f"cond_or_uncond: {cond_or_uncond}")
    ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
  2. Validate Indices: Ensure that cond_or_uncond contains only valid indices (0 or 1). You might add a check before the concatenation:

    for i in cond_or_uncond:
        if i not in [0, 1]:
            raise ValueError(f"Invalid index {i} in cond_or_uncond")
  3. Review Data Flow: Trace the assignment and modification of cond_or_uncond throughout the code to ensure it is being handled correctly.

By following these steps, you should be able to identify and correct the root cause of the "IndexError: tuple index out of range" in your script.

=============================

And Claude:

Sure, here's the explanation in English:

Based on the error message, this exception does not seem to be a bug in DualCFGGuider itself, but rather an index out of range error that occurred during some operations related to the attention mechanism.

Specifically, the error occurred at line 131 of the CrossAttentionPatch.py file:

ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)

This line of code attempts to select the corresponding parts from the k_cond and k_uncond tensors based on the indices in the cond_or_uncond list, and then concatenate them. However, the cond_or_uncond list may contain some out-of-range indices, resulting in the IndexError: tuple index out of range exception.

The root cause of this error may lie in the computation process of the attention mechanism, possibly due to issues with the format or dimensions of the input data, leading to the index going out of bounds. To resolve this issue, you can carefully check if the shape of the input data matches the model's expectations, or you can try debugging the relevant code to determine which step in the attention mechanism computation is causing the error.