comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

Error occurred when executing CLIPTextEncode in sd3_medium_incl_clips_t5xxlfp8 #3725

Open Imadbein46 opened 3 months ago

Imadbein46 commented 3 months ago

I get this error with sd3_medium_incl_clips_t5xxlfp8:

workflow-stable-diffusion-3-simple-workflow

GPU: RTX 3060 12 GB, RAM: 64 GB

Python version: 3.11.6, PyTorch version: 2.3.1+cu121, safetensors: 0.4.3

Error occurred when executing CLIPTextEncode:

"index_select_cuda" not implemented for 'Float8_e4m3fn'

```
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 58, in encode
    cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 142, in encode_from_tokens
    cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd3_clip.py", line 124, in encode_token_weights
    t5_out, t5_pooled = self.t5xxl.encode_token_weights(token_weight_pars_t5)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 40, in encode_token_weights
    out, pooled = self.encode(to_encode)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 201, in encode
    return self(tokens)
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 180, in forward
    outputs = self.transformer(tokens, attention_mask, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state)
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\t5.py", line 230, in forward
    x = self.shared(input_ids)
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\sparse.py", line 163, in forward
    return F.embedding(
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\functional.py", line 2264, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: "index_select_cuda" not implemented for 'Float8_e4m3fn'
```

JorgeR81 commented 3 months ago

Solution here (just updating safetensors worked for me): https://github.com/comfyanonymous/ComfyUI/issues/3693#issuecomment-2163475040

Imadbein46 commented 3 months ago

> Solution here: ( Just updating safetensors, worked for me ) #3693 (comment)

It is already updated, to 0.4.3.
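One thing worth checking with the portable build: it ships its own `python_embeded` interpreter with its own site-packages, so a safetensors version installed in the system Python doesn't count. A small sketch for confirming what the running interpreter actually imports (the `"0.4.3"` threshold here is just the version mentioned in this thread, not a documented minimum):

```python
# Check which safetensors the *running* interpreter sees; in the portable build,
# run this with python_embeded\python.exe, not the system Python.
from importlib.metadata import version, PackageNotFoundError

def version_tuple(v: str) -> tuple[int, ...]:
    # "0.4.3" -> (0, 4, 3); pre-release suffixes are ignored for simplicity
    return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())

try:
    installed = version("safetensors")
    status = "ok" if version_tuple(installed) >= version_tuple("0.4.3") else "too old"
    print("safetensors", installed, status)
except PackageNotFoundError:
    print("safetensors is not installed in this interpreter")
```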

JorgeR81 commented 3 months ago

> is already updated to 0.4.3

Yes, I'm also at 0.4.3. It seems you have a different issue...

liusida commented 3 months ago

It seems that you are using the SD3 model with T5 XXL included. (10.9 GB, right?)

However, there's no need to bundle the huge T5 XXL into the SD3 model. I suggest using the other file: https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium.safetensors , which is the SD3 model by itself.

The official workflow example uses sd3_medium.safetensors.

Also, you can try my custom version (the blue-colored nodes).

Link is here: https://github.com/liusida/ComfyUI-SD3-nodes
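The split liusida describes (the plain sd3_medium.safetensors checkpoint plus text encoders loaded through a separate CLIP loader node) can be sketched as an API-format workflow fragment. Node ids and encoder filenames below are illustrative placeholders, not values from this thread:

```python
# Hypothetical API-format fragment: sd3_medium.safetensors without baked-in text
# encoders, with CLIP coming from TripleCLIPLoader instead of the checkpoint.
# Node ids and the clip_name* filenames are placeholders for illustration.
workflow = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd3_medium.safetensors"},
    },
    "2": {
        "class_type": "TripleCLIPLoader",  # clip_l + clip_g + t5xxl, loaded separately
        "inputs": {
            "clip_name1": "clip_l.safetensors",
            "clip_name2": "clip_g.safetensors",
            "clip_name3": "t5xxl_fp16.safetensors",
        },
    },
    "3": {
        "class_type": "CLIPTextEncode",
        # CLIP input wired to the loader node's first output, not the checkpoint
        "inputs": {"text": "a prompt", "clip": ["2", 0]},
    },
}
print(sorted(workflow))
```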

Imadbein46 commented 3 months ago

> It seems that you are using the SD3 model with T5 XXL included. (10.9 GB, right?)
>
> [...]
>
> Link is here: https://github.com/liusida/ComfyUI-SD3-nodes

Hi. Yes, I use SD3 with T5 XXL. I tried out your nodes and they did fix the issue: when I use DualCLIPLoader, SD3 works, but if I use TripleCLIPLoader I get errors in the t5xxl CLIP. So I downloaded a clean ComfyUI, and without custom_nodes it seems to work with sd3_medium_incl_clips_t5xxlfp8.