KillyTheNetTerminal opened this issue 2 months ago
oh my god thanks it worked!
Exactly the same workflow with the same model, but this is the output. Am I missing something?
It looks like using `--fp32-text-enc` affects the results. The results on my machine are similar to yours.
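For context: `--fp32-text-enc` is a ComfyUI launch flag that stores the text encoder weights in fp32 instead of fp16 (e.g. starting ComfyUI with `python main.py --fp32-text-enc`), so small precision differences in the T5 conditioning can shift the sampled image.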
The effect is somewhat different on some GPU models that cannot run fp16. This may be something we need to pay attention to in the future. cc @budui
Is there a way to solve this? Can't the RX580 use fp16?
```
Error occurred when executing T5TextEncode #ELLA:

"addmm_impl_cpu_" not implemented for 'Half'

  File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA\ella.py", line 228, in encode
    cond = text_encoder_model(text, max_length=max_length)
  File "C:\Users\WarMa\OneDrive\Escritorio\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ELLA\model.py", line 158, in __call__
    outputs = self.model(text_input_ids, attention_mask=attention_mask)  # type: ignore
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1980, in forward
    encoder_outputs = self.encoder(
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 1115, in forward
    layer_outputs = layer_module(
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 695, in forward
    self_attention_outputs = self.layer[0](
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 602, in forward
    attention_output = self.SelfAttention(
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\transformers\models\t5\modeling_t5.py", line 521, in forward
    query_states = shape(self.q(hidden_states))  # (batch_size, n_heads, seq_length, dim_per_head)
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\WarMa\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
```