gavinliukaiber closed this issue 10 months ago
actually we can probably use
clip.cond_stage_model.clip_l.special_tokens
which has the index for the padding token:
{'start': 49406, 'end': 49407, 'pad': 49407}
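A minimal sketch of that idea: pad the shorter token-id sequence with the pad token id from `special_tokens`, rather than padding embeddings with zeros later. The helper name `pad_to_length` is mine, not from the codebase:

```python
# The special-token ids below are the ones reported from
# clip.cond_stage_model.clip_l.special_tokens in this thread.
SPECIAL_TOKENS = {'start': 49406, 'end': 49407, 'pad': 49407}

def pad_to_length(tokens, target_len, pad_id=SPECIAL_TOKENS['pad']):
    """Right-pad a list of token ids to target_len with the pad token id."""
    if len(tokens) >= target_len:
        return tokens
    return tokens + [pad_id] * (target_len - len(tokens))

short = [49406, 320, 1125, 49407]  # toy sequence: <start> ... <end>
padded = pad_to_length(short, 8)
# padded -> [49406, 320, 1125, 49407, 49407, 49407, 49407, 49407]
```

Both prompts would then tokenize to the same length before encoding, so the resulting conditioning tensors match in dimension 1.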
A separate thing: this line silently truncates the longer prompt, which also seems incorrect?
I made some changes yesterday based on your suggestion. The error is no longer present, but I'll keep this open while people try it out; if there are any issues, they can report them here and I'll have a look. Sorry it took a while to look into this, it's been a busy week.
Hey there! Went through several GitHub issues here and, I must say, the issue is still there. Updated both ComfyUI and the custom nodes, everything's latest.

```
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "C:\ComfyUI_BLYAT\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_BLYAT\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_BLYAT\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_BLYAT\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\ScheduledNodes.py", line 126, in animate
    pc = BatchPoolAnimConditioning( pos_cur_prompt, pos_nxt_prompt, weight, clip,)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI_BLYAT\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes\BatchFuncs.py", line 208, in BatchPoolAnimConditioning
    final_conditioning = torch.cat(cond_out, dim=0)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 154 but got size 77 for tensor number 25 in the list.
```

(The prompt is not my own, so I hid it.) It did go through when there were 9-10 frame numbers in total, though; I'm not sure what the exact number was.
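For context on the error itself: `torch.cat` requires every dimension except the concatenation dimension to match, so a 154-token conditioning and a 77-token one cannot be stacked along dim 0. A minimal repro, assuming PyTorch is installed (the shapes are illustrative, matching the 77/154-token, 768-dim conditionings in the traceback):

```python
import torch

a = torch.zeros(1, 154, 768)  # conditioning from a long (two-chunk) prompt
b = torch.zeros(1, 77, 768)   # conditioning from a standard 77-token prompt

try:
    torch.cat([a, b], dim=0)
except RuntimeError as e:
    # RuntimeError: Sizes of tensors must match except in dimension 0. ...
    print(e)

# Once both conditionings have the same token length, concatenation works:
c = torch.zeros(1, 154, 768)
batched = torch.cat([a, c], dim=0)
print(batched.shape)  # torch.Size([2, 154, 768])
```

This is why the fix being discussed is to make both prompts encode to the same token length before batching.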
Yes! I ran into that problem too! A problem without a cause.
I am encountering the same issue here. I've noticed that removing vocabulary repeated between the pre/app text and the scheduled prompt can sometimes resolve the error.
this issue was solved in the latest commit. Thanks for your patience
These two lines pad with zeros as the token embedding when the second prompt is longer.
However, the embedding CLIP actually outputs for that token is non-zero: the sum over its 768 dimensions is around -86 (the following output was obtained by breaking in the debugger at the lines above):

Therefore this zero-padding seems incorrect. I think the correct approach would be to pad the original token sequence with the padding token CLIP itself uses, but I am not sure how to do that. Thoughts?
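A toy illustration of the mismatch being described (the embedding table and its 2-dim vectors are made up; in real CLIP-L each embedding has 768 dimensions and, as noted above, the pad-token embedding sums to roughly -86, i.e. it is far from zero):

```python
# Contrast two padding strategies:
#  1) encode first, then pad the embedding rows with zeros (what the code does);
#  2) pad the token ids with CLIP's pad token, then encode (the proposal above).
PAD_ID = 49407
embedding_table = {
    49406: [0.1, 0.2],    # <start> (toy values)
    320:   [0.5, -0.3],   # some word token (toy values)
    49407: [-0.4, -0.6],  # <end>/<pad>: note it is NOT the zero vector
}

tokens = [49406, 320]
target_len = 4

# Strategy 1: zero-pad the embeddings after lookup.
zero_padded = [embedding_table[t] for t in tokens]
zero_padded += [[0.0, 0.0]] * (target_len - len(tokens))

# Strategy 2: pad the token ids first, then look up embeddings.
token_padded = tokens + [PAD_ID] * (target_len - len(tokens))
pad_padded = [embedding_table[t] for t in token_padded]

# The trailing rows differ: zero vectors vs the non-zero pad-token embedding.
print(zero_padded[-1])  # [0.0, 0.0]
print(pad_padded[-1])   # [-0.4, -0.6]
```

This only shows the raw embedding lookup; in the real model the transformer layers make every output position context-dependent, which is a further reason the zero rows don't match what CLIP would produce for a genuinely padded prompt.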