kijai / ComfyUI-MochiWrapper

Apache License 2.0

Long Prompts (256 Token limit) #34

Closed MushroomFleet closed 3 days ago

MushroomFleet commented 3 days ago

I'm generating prompts with Vision and found that there is a 256-token limit. I've adjusted my Vision prompt for use with i2v, but thought it worth reporting:

!!! Exception during processing !!! Prompt is too long, max tokens supported is 256 or less, got 512
Traceback (most recent call last):
  File "I:\MACHINES3\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\MACHINES3\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\MACHINES3\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "I:\MACHINES3\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\MACHINES3\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MochiWrapper\nodes.py", line 278, in process
    raise ValueError(f"Prompt is too long, max tokens supported is {max_tokens} or less, got {embeds.shape[1]}")
ValueError: Prompt is too long, max tokens supported is 256 or less, got 512
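
For reference, here is a minimal sketch of how one could check a generated prompt's token count before queueing the workflow. It assumes Mochi's text encoder uses the google/t5-v1_1-xxl tokenizer (an assumption on my part, not confirmed in this thread), and `count_tokens` is a hypothetical helper, not part of the wrapper:

```python
from transformers import T5Tokenizer

# Assumption: Mochi's text encoder tokenizes with google/t5-v1_1-xxl.
tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-xxl")

def count_tokens(prompt: str) -> int:
    # input_ids includes the end-of-sequence token the tokenizer appends.
    return len(tokenizer(prompt).input_ids)

prompt = "..."  # the Vision-generated prompt goes here
if count_tokens(prompt) > 256:
    print("Prompt too long for Mochi; shorten or truncate it before encoding")
```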

I know there are ways to split and concatenate long prompts, but that is above my pay grade :) I will try prompt scheduling next, to see if we can alter the prompt as we move through the step count.

Again, not sure if this comes from Mochi itself, but worth the report maybe :)

Thanks again !

kijai commented 3 days ago

Mochi uses max_length 256, so that's expected. I don't know the best way to handle it. Auto-truncating is an option, but personally I feel an error is best, so the user knows their prompt would be truncated. For automated solutions, the prompt generation should be limited and/or the outputs from the generators truncated.
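
As a sketch of what auto-truncation could look like, again assuming the google/t5-v1_1-xxl tokenizer (`truncate_prompt` is a hypothetical helper): tokenize with hard truncation at the limit, then decode back to text so the shortened prompt can be fed to the encoder node unchanged.

```python
from transformers import T5Tokenizer

# Assumption: same T5 tokenizer as Mochi's text encoder.
tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-xxl")

def truncate_prompt(prompt: str, max_tokens: int = 256) -> str:
    # Tokenize with hard truncation at the limit, then decode back to text;
    # skip_special_tokens drops the end-of-sequence marker from the output.
    ids = tokenizer(prompt, truncation=True, max_length=max_tokens).input_ids
    return tokenizer.decode(ids, skip_special_tokens=True)

long_prompt = "..."  # output from the prompt generator
short_prompt = truncate_prompt(long_prompt)
```

Note that truncating at a token boundary can clip mid-sentence, so trimming the generator's output at a sentence boundary may read better than blind truncation.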

MushroomFleet commented 3 days ago

Big Thanks !!