patientx / ComfyUI-Zluda

The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface. Now ZLUDA enhanced for better AMD GPU performance.
GNU General Public License v3.0

the workflow stopped at Clip Text Encode (Prompt) #43

Open sansan333111 opened 1 day ago

sansan333111 commented 1 day ago

Your question

I followed the Setup instructions and got into ComfyUI. But when I run the basic workflow, it always stops at Clip Text Encode (Prompt).

Logs

To see the GUI go to: http://127.0.0.1:8188
got prompt
model weight dtype torch.float16, manual cast: None
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SDXLClipModel
Loading 1 new model
loaded completely 0.0 1560.802734375 True
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
loaded completely 0.0 4897.0483474731445 True
loaded completely 5748.985554504395 1560.802734375 True
!!! Exception during processing !!! CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Cdesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`
Traceback (most recent call last):
  File "d:\ComfyUI-Zluda\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ComfyUI-Zluda\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ComfyUI-Zluda\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "d:\ComfyUI-Zluda\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "d:\ComfyUI-Zluda\nodes.py", line 65, in encode
    output = clip.encode_from_tokens(tokens, return_pooled=True, return_dict=True)
  File "d:\ComfyUI-Zluda\comfy\sd.py", line 133, in encode_from_tokens
    o = self.cond_stage_model.encode_token_weights(tokens)
  File "d:\ComfyUI-Zluda\comfy\sdxl_clip.py", line 60, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
  File "d:\ComfyUI-Zluda\comfy\sd1_clip.py", line 41, in encode_token_weights
    o = self.encode(to_encode)
  File "d:\ComfyUI-Zluda\comfy\sd1_clip.py", line 238, in encode
    return self(tokens)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "d:\ComfyUI-Zluda\comfy\sd1_clip.py", line 210, in forward
    outputs = self.transformer(tokens, attention_mask_model, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state, dtype=torch.float32)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "d:\ComfyUI-Zluda\comfy\clip_model.py", line 137, in forward
    x = self.text_model(*args, **kwargs)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "d:\ComfyUI-Zluda\comfy\clip_model.py", line 113, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "d:\ComfyUI-Zluda\comfy\clip_model.py", line 70, in forward
    x = l(x, mask, optimized_attention)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "d:\ComfyUI-Zluda\comfy\clip_model.py", line 51, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "d:\ComfyUI-Zluda\comfy\clip_model.py", line 17, in forward
    q = self.q_proj(x)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI-Zluda\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "d:\ComfyUI-Zluda\comfy\ops.py", line 68, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "d:\ComfyUI-Zluda\comfy\ops.py", line 64, in forward_comfy_cast_weights
    return torch.nn.functional.linear(input, weight, bias)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasLtMatmulAlgoGetHeuristic( ltHandle, computeDesc.descriptor(), Adesc.descriptor(), Bdesc.descriptor(), Cdesc.descriptor(), Cdesc.descriptor(), preference.descriptor(), 1, &heuristicResult, &returnedResult)`

Prompt executed in 5.72 seconds
fatal: not a git repository (or any of the parent directories): .git
Failed to get ComfyUI version: Command '['git', 'describe', '--tags']' returned non-zero exit status 128.

Other

No response

patientx commented 1 day ago

What is your GPU? This looks like a CUDA problem. It is either a wrong/incomplete ZLUDA install, or a custom node changed the installed torch and/or other required packages.
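A minimal diagnostic sketch for checking the second possibility: run this inside the ComfyUI-Zluda venv to confirm which torch build is installed, whether the ZLUDA-backed "CUDA" device is visible, and whether a small fp16 matmul (the operation that fails in the log above) succeeds. This is an illustrative snippet, not part of the repo.

```python
# Hypothetical diagnostic: run inside the ComfyUI-Zluda venv.
# Checks the installed torch build and exercises the fp16 matmul
# path that raised CUBLAS_STATUS_NOT_SUPPORTED in the log.
try:
    import torch
except ImportError:
    torch = None

if torch is None:
    print("torch is not installed in this environment")
else:
    # A custom node may have replaced torch with a CPU-only or
    # mismatched build; the version string usually reveals that.
    print("torch version:", torch.__version__)
    print("cuda available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        # Under a working ZLUDA install this should name the AMD card.
        print("device:", torch.cuda.get_device_name(0))
        a = torch.randn(8, 8, dtype=torch.float16, device="cuda")
        b = torch.randn(8, 8, dtype=torch.float16, device="cuda")
        print("fp16 matmul ok, result shape:", (a @ b).shape)
```

If the version string or device name looks wrong, reinstalling the venv per the repo's Setup steps is usually the fix.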