TencentQQGYLab / ComfyUI-ELLA

ELLA nodes for ComfyUI

Error occurred when executing T5TextEncode #ELLA: (RX580 i39100f Windows11 32gb Ram) #42

Open · KillyTheNetTerminal opened this issue 6 months ago

KillyTheNetTerminal commented 6 months ago

Error occurred when executing T5TextEncode #ELLA:

"addmm_implcpu" not implemented for 'Half'

File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(slice_dict(input_data_all, i))) File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA\ella.py", line 228, in encode cond = text_encoder_model(text, max_length=max_length) File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA\model.py", line 158, in call outputs = self.model(text_input_ids, attention_mask=attention_mask) # type: ignore File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, *kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1980, in forward encoder_outputs = self.encoder( File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(args, kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1115, in forward layer_outputs = layer_module( File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 695, in forward self_attention_outputs = self.layer[0]( File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, *kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 602, in forward attention_output = self.SelfAttention( File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(args, kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 521, in forward query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, dim_per_head) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward return F.linear(input, self.weight, self.bias)

JettHu commented 6 months ago

Use `--fp32-text-enc`.
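For reference, `--fp32-text-enc` is a ComfyUI launch flag that stores the text encoder weights in fp32 instead of fp16, which sidesteps the missing CPU Half kernels, e.g. start ComfyUI with `python main.py --fp32-text-enc` (the exact entry point and any other flags depend on your install).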

KillyTheNetTerminal commented 6 months ago

Oh my god, thanks, it worked!

KillyTheNetTerminal commented 6 months ago

[image: imagen_2024-05-08_120449476]

Exactly the same workflow with the same model, but this is the output. Am I missing something?

[image: imagen_2024-05-08_120523659]

JettHu commented 6 months ago

It looks like using `--fp32-text-enc` affects the results.

The results on my machine are similar to yours.

[image]

The effect is somewhat different on some GPU models that cannot run fp16. This may be something we need to pay attention to in the future. cc @budui
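A rough intuition for why encoder precision changes the image: each fp16 layer introduces a tiny rounding error, and T5's deep stack compounds it, so the conditioning ELLA hands to the UNet shifts enough to alter the sample. A toy sketch of that compounding (illustration only, not ELLA code; fp16 storage is emulated by rounding so it also runs on CPUs without Half matmul):

```python
import torch

torch.manual_seed(0)

x32 = torch.randn(1, 64)
x16 = x32.clone()

# Push the same input through a stack of random layers, once in fp32
# and once with activations rounded to fp16 after every layer.
for _ in range(24):
    w = torch.randn(64, 64) * 0.1
    x32 = torch.tanh(x32 @ w)
    x16 = torch.tanh(x16 @ w).half().float()  # emulate fp16 storage

# Per-layer rounding error on the order of 1e-3 compounds over the stack.
print((x32 - x16).abs().max())
```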

KillyTheNetTerminal commented 6 months ago

Is there a way to solve this? Can't the RX580 use fp16?
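The traceback itself hints at the answer: the failing kernel is `addmm_impl_cpu_`, meaning the T5 encoder was executing on the CPU (the RX580 has no CUDA, so on this setup the text encoder falls back to CPU), and PyTorch's CPU backend there has no Half matmul. A sketch of the dtype choice that `--fp32-text-enc` effectively enforces (hypothetical helper, not ComfyUI code):

```python
import torch

# Hypothetical helper: pick a text encoder dtype for the target device.
def text_encoder_dtype(device: torch.device, force_fp32: bool = False) -> torch.dtype:
    # force_fp32 plays the role of ComfyUI's --fp32-text-enc flag.
    if force_fp32 or device.type == "cpu":
        # The CPU backend has no Half addmm kernel, so fp32 is the safe choice.
        return torch.float32
    return torch.float16  # GPUs with working fp16 support

print(text_encoder_dtype(torch.device("cpu")))  # torch.float32
```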