jokero3answer opened this issue 2 weeks ago
Diagnostics-1718251462.log
This error originates in the `ipadapter_attention` function in `CrossAttentionPatch.py`. The specific error is `tuple index out of range`, which means you are trying to access an element that does not exist in the tuple.
According to the error message, the problem occurs in the following code snippet:

```python
ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)
```

This error typically happens when `cond_or_uncond` contains an index outside the tuple's range (i.e., a value other than 0 or 1).
Here are some possible solutions:
Check the values of `cond_or_uncond`: make sure it contains only 0s and 1s.
```python
print(cond_or_uncond)  # inspect the values
```

Debug the source of `cond_or_uncond`: find out how `cond_or_uncond` is generated and verify that it is correct.
Add an index check: validate each index before accessing the tuple element.
```python
ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond if i in [0, 1]], dim=0)
```

The exact solution needs to be adapted to your actual code logic, but the steps above should generally help you locate and fix this error.
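The failure mode described above can be reproduced without torch. This minimal sketch (with hypothetical string values standing in for the key tensors) shows why an out-of-range value in `cond_or_uncond` raises the error, and how the guard from the snippet above avoids it:

```python
# Stand-ins for the conditional / unconditional key tensors.
k_cond, k_uncond = "K_COND", "K_UNCOND"

def gather(cond_or_uncond):
    # Mirrors the failing pattern: [(k_cond, k_uncond)[i] for i in cond_or_uncond]
    return [(k_cond, k_uncond)[i] for i in cond_or_uncond]

print(gather([0, 1]))      # fine: both indices are in range for a 2-tuple

try:
    gather([0, 1, 2])      # 2 is out of range for a 2-tuple
except IndexError as e:
    print("error:", e)     # -> tuple index out of range

# Guarded version, as suggested above: skip out-of-range indices.
safe = [(k_cond, k_uncond)[i] for i in [0, 1, 2] if i in (0, 1)]
print(safe)
```

Note that the guard silently drops the bad index rather than fixing whatever produced it, so it is a workaround for debugging, not a root-cause fix.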
@cubiq @IDGallagher @madriss
I had similar error when executing the cosxl example (https://github.com/cubiq/ComfyUI_IPAdapter_plus/blob/main/examples/ipadapter_cosxl_edit.json)
Attached is the log cosxl_error_log.txt
What's the reason for this? The original sampler runs successfully, but the custom one does not!
I did not change anything for the cosxl example, loaded the workflow into ComfyUI, queued the prompt and had the error 'tuple index out of range', I updated my ComfyUI and custom nodes a few minutes ago, let me know if you need more details to help debug my issue. Thanks
Updating the custom nodes fixed it for me; I ran into the same problem earlier.
All my nodes are up to date.
The original sampler is working.
it's a problem with low-spec GPUs. When you don't have enough VRAM, the embeds are split instead of being sent all together, and I don't handle that situation at this time. It's a relatively easy fix.
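The splitting behavior described above can be sketched as follows. This is an illustrative assumption, not the actual ComfyUI scheduling code: the point is only that a split pass may carry one kind of conditioning while the index list still references both positions.

```python
# Illustrative sketch (NOT the real ComfyUI internals): when a batch
# is too large for VRAM, cond and uncond embeds may be processed in
# separate passes instead of all together.
def attention_pass(available, cond_or_uncond):
    # available: tuple of embeds present in this pass
    # cond_or_uncond: index list selecting embeds per batch item
    return [available[i] for i in cond_or_uncond]

# Full pass: both embeds available, indices 0 and 1 are both valid.
full = attention_pass(("k_cond", "k_uncond"), [0, 1])
print(full)

# Split pass: only one embed is available in this pass, but the
# index list still references position 1 -> tuple index out of range.
try:
    attention_pass(("k_cond",), [1])
except IndexError as e:
    print("split pass failed:", e)
```

This matches the symptom reported later in the thread: smaller batches fit in VRAM and run in one pass, while larger batches trigger the split and the error.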
Okay, thanks very much. It actually works with other samplers.
That is interesting, my workflow finally worked. The issue is with 'Repeat Latent Batch': the workflow worked with a batch of 3, but started to error with a batch of 4.
My GPU is an RTX 4080 16G, and VRAM runs at around 60% when executing the workflow, so this does not seem to be a low-spec GPU issue. But anyway, thanks, at least I can get my workflow running.
depends how comfy feels in the moment. If the embeds are split, it means it doesn't have enough VRAM. You can try the high VRAM option, which will force everything onto the GPU.
I just changed the mode to high VRAM, and VRAM runs at around 60-70%, but I still get the error with 4 in 'Repeat Latent Batch'.
I am not really bothered by this error, since the workflow can give me at least 3 outputs. Thanks for looking into this matter.
it stays at 60-70% because it would overflow otherwise with all the embeds loaded.