yolain / ComfyUI-Easy-Use

To make ComfyUI easier to use, I have optimized and integrated some commonly used nodes.
GNU General Public License v3.0

[bug] Flux [ERROR] model or clip is missing #378

Open Milor123 opened 2 months ago

Milor123 commented 2 months ago

Hi guys, I'm hitting the same error as https://github.com/yolain/ComfyUI-Easy-Use/issues/306

[screenshot]

[screenshot]

I've tried using all my models:

[screenshot]

[screenshot]

My ComfyUI is up to date, and so are your extensions.

Prompt executed in 17.24 seconds
No pending upload
got prompt
Executed {'node': '44', 'display_node': '44', 'output': {'images': [{'filename': 'PB-_temp_qbtdu_00001_.png', 'subfolder': 'PreviewBridge', 'type': 'temp'}]}, 'prompt_id': '0c34198f-4f49-4971-861a-0caf2a48f29f'}
Executed {'node': '166', 'display_node': '166', 'output': {'images': [{'filename': 'ComfyUI_temp_qovil_00001_.png', 'subfolder': '', 'type': 'temp'}]}, 'prompt_id': '0c34198f-4f49-4971-861a-0caf2a48f29f'}
clip missing: ['text_projection.weight']
[EasyUse] 正在加载模型... (Loading model...)
!!! Exception during processing !!! [ERROR] model or clip is missing
Traceback (most recent call last):
  File "/home/noe/Documentos/ComfyUI/execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/Documentos/ComfyUI/execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/Documentos/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/noe/Documentos/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/Documentos/ComfyUI/custom_nodes/ComfyUI-Easy-Use/py/easyNodes.py", line 1972, in fluxloader
    return super().adv_pipeloader(ckpt_name, 'Default', vae_name, 0,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/Documentos/ComfyUI/custom_nodes/ComfyUI-Easy-Use/py/easyNodes.py", line 924, in adv_pipeloader
    model, clip, vae, clip_vision, lora_stack = easyCache.load_main(ckpt_name, config_name, vae_name, lora_name, lora_model_strength, lora_clip_strength, optional_lora_stack, model_override, clip_override, vae_override, prompt)
                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/Documentos/ComfyUI/custom_nodes/ComfyUI-Easy-Use/py/libs/loader.py", line 455, in load_main
    raise Exception(f"[ERROR] model or clip is missing")
Exception: [ERROR] model or clip is missing

Prompt executed in 11.11 seconds
No pending upload
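For context, the "[ERROR] model or clip is missing" exception is raised by Easy-Use's loader when neither the selected checkpoint nor the override inputs provide both a model and a CLIP. The sketch below is a hypothetical reconstruction of that check, not the actual loader.py code; the helper and tuple layout are invented to show why a UNET-only Flux checkpoint fails unless overrides are connected:

```python
# Hypothetical sketch (NOT the actual Easy-Use loader.py) of why load_main
# raises "[ERROR] model or clip is missing": a UNET-only Flux checkpoint has
# no bundled CLIP, so unless the override inputs supply one, the check fails.

def load_checkpoint(ckpt_name):
    # Stub: pretend this is a UNET-only checkpoint with no bundled CLIP/VAE.
    return ("flux-unet", None, None)

def load_main(ckpt_name, model_override=None, clip_override=None, vae_override=None):
    model, clip, vae = load_checkpoint(ckpt_name)
    # Overrides (e.g. from "Load Diffusion Model" / "DualCLIPLoader" nodes)
    # take precedence over whatever the checkpoint itself provides.
    model = model_override if model_override is not None else model
    clip = clip_override if clip_override is not None else clip
    vae = vae_override if vae_override is not None else vae
    if model is None or clip is None:
        raise Exception("[ERROR] model or clip is missing")
    return model, clip, vae
```

Under this reading, connecting clip_override (and vae_override) makes the check pass even though the checkpoint carries no CLIP weights.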

In other cases it shows this:

Prompt executed in 16.93 seconds
No pending upload
got prompt
Executed {'node': '44', 'display_node': '44', 'output': {'images': [{'filename': 'PB-_temp_qbtdu_00001_.png', 'subfolder': 'PreviewBridge', 'type': 'temp'}]}, 'prompt_id': '0c01a769-c98f-42dd-9e4e-00f791c25ef7'}
Executed {'node': '166', 'display_node': '166', 'output': {'images': [{'filename': 'ComfyUI_temp_qovil_00001_.png', 'subfolder': '', 'type': 'temp'}]}, 'prompt_id': '0c01a769-c98f-42dd-9e4e-00f791c25ef7'}
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLUX
[EasyUse] 正在加载模型... (Loading model...)
[EasyUse] 正在进行正面提示词... (Processing positive prompt...)
[EasyUse] 正在进行负面提示词... (Processing negative prompt...)
!!! Exception during processing !!! tuple index out of range
Traceback (most recent call last):
  File "/home/noe/Documentos/ComfyUI/execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/Documentos/ComfyUI/execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/Documentos/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/noe/Documentos/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/Documentos/ComfyUI/custom_nodes/ComfyUI-Easy-Use/py/easyNodes.py", line 1972, in fluxloader
    return super().adv_pipeloader(ckpt_name, 'Default', vae_name, 0,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/noe/Documentos/ComfyUI/custom_nodes/ComfyUI-Easy-Use/py/easyNodes.py", line 938, in adv_pipeloader
    positive_embeddings_final, negative_embeddings_final = easyControlnet().apply(controlnet[0], controlnet[5], positive_embeddings_final, negative_embeddings_final, controlnet[1], start_percent=controlnet[2], end_percent=controlnet[3], control_net=None, scale_soft_weights=controlnet[4], mask=None, easyCache=easyCache, use_cache=True, model=model, vae=vae)
                                                                                                 ~~~~~~~~~~^^^
IndexError: tuple index out of range
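The second traceback looks like the same misconfiguration surfacing one step later: the controlnet tuple taken from the pipe is shorter than adv_pipeloader expects when it reads controlnet[5]. A minimal illustration of that failure mode (the tuple contents here are invented):

```python
# Illustration only: indexing past the end of a tuple raises IndexError,
# which is what happens when the pipe's controlnet tuple has fewer elements
# than the code indexes (it reads controlnet[5]).
controlnet = ("controlnet_name", 1.0, 0.0, 1.0)  # hypothetical 4-element tuple

try:
    _ = controlnet[5]
except IndexError as exc:
    print(exc)  # prints: tuple index out of range
```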

What should I do, guys?

GPU-server commented 1 month ago

What was the solution?

yolain commented 1 month ago

What was the solution?

The issue is that the "Load Diffusion Model" node is not enabled. It requires connecting model_override, vae_override, and clip_override.