balazik / ComfyUI-PuLID-Flux

PuLID-Flux ComfyUI implementation
Apache License 2.0

CUDA ERROR #11

Open abozahran opened 6 hours ago

abozahran commented 6 hours ago

got prompt
!!! Exception during processing !!! No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 577, 16, 64) (torch.bfloat16)
     key         : shape=(1, 577, 16, 64) (torch.bfloat16)
     value       : shape=(1, 577, 16, 64) (torch.bfloat16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`flshattF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see python -m xformers.info for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see python -m xformers.info for more info
    triton is not available
    requires A100 GPU
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see python -m xformers.info for more info
`smallkF` is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.bfloat16 (supported: {torch.float32})
    max(query.shape[-1] != value.shape[-1]) > 32
    has custom scale
    Operator wasn't built - see python -m xformers.info for more info
    unsupported embed per head: 64
Traceback (most recent call last):
  File "C:\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "C:\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "C:\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "C:\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "C:\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux\pulidflux.py", line 342, in apply_pulid_flux
    id_cond_vit, id_vit_hidden = eva_clip(face_features_image, return_all_features=False, return_hidden=True, shuffle=False)
  File "c:\users\zahran\appdata\local\programs\python\python310\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "c:\users\zahran\appdata\local\programs\python\python310\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux\eva_clip\eva_vit_model.py", line 544, in forward
    x, hidden_states = self.forward_features(x, return_all_features, return_hidden, shuffle)
  File "C:\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux\eva_clip\eva_vit_model.py", line 531, in forward_features
    x = blk(x, rel_pos_bias=rel_pos_bias)
  File "c:\users\zahran\appdata\local\programs\python\python310\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "c:\users\zahran\appdata\local\programs\python\python310\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux\eva_clip\eva_vit_model.py", line 293, in forward
    x = x + self.drop_path(self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias, attn_mask=attn_mask))
  File "c:\users\zahran\appdata\local\programs\python\python310\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "c:\users\zahran\appdata\local\programs\python\python310\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux\eva_clip\eva_vit_model.py", line 208, in forward
    x = xops.memory_efficient_attention(
  File "c:\users\zahran\appdata\local\programs\python\python310\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
    return _memory_efficient_attention(
  File "c:\users\zahran\appdata\local\programs\python\python310\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "c:\users\zahran\appdata\local\programs\python\python310\lib\site-packages\xformers\ops\fmha\__init__.py", line 306, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp)
  File "c:\users\zahran\appdata\local\programs\python\python310\lib\site-packages\xformers\ops\fmha\dispatch.py", line 94, in _dispatch_fw
    return _run_priority_list(
  File "c:\users\zahran\appdata\local\programs\python\python310\lib\site-packages\xformers\ops\fmha\dispatch.py", line 69, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 577, 16, 64) (torch.bfloat16)
     key         : shape=(1, 577, 16, 64) (torch.bfloat16)
     value       : shape=(1, 577, 16, 64) (torch.bfloat16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`flshattF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see python -m xformers.info for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see python -m xformers.info for more info
    triton is not available
    requires A100 GPU
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    Operator wasn't built - see python -m xformers.info for more info
`smallkF` is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.bfloat16 (supported: {torch.float32})
    max(query.shape[-1] != value.shape[-1]) > 32
    has custom scale
    Operator wasn't built - see python -m xformers.info for more info
    unsupported embed per head: 64

balazik commented 5 hours ago

This issue is not related to ComfyUI-PuLID-Flux. It seems you have the wrong xformers build installed in your Python environment. The message "xFormers wasn't build with CUDA support" means your installed xformers wheel was compiled without CUDA, so you need to install an xformers build that matches your CUDA and PyTorch versions!

Here is how to do it: https://github.com/facebookresearch/xformers/blob/main/README.md
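
For example, something like the following should work (a sketch, not exact commands for your machine; cu121 below is only an example wheel index, pick the one matching the CUDA version your PyTorch was built with):

    # Show how xformers was built and which attention operators are available
    python -m xformers.info

    # Show your PyTorch version and the CUDA version it was built for
    python -c "import torch; print(torch.__version__, torch.version.cuda)"

    # Reinstall xformers from the PyTorch wheel index matching that CUDA build
    # (cu121 is an example; use e.g. cu118 for a CUDA 11.8 build of PyTorch)
    pip install -U xformers --index-url https://download.pytorch.org/whl/cu121

After reinstalling, python -m xformers.info should list the attention operators (flshattF, cutlassF, ...) as available instead of "not built".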

Let me know how it went.