Open thoughtfission opened 7 months ago
Similar issue here!
Can you share more details about your setup? I assume you have to install a node (?) after setting up ComfyUI?
Hi,
I responded by email with more detail. Did you get it?
Thanks, G
I did not -- did you email it to vik@moondream.ai?
I'll try again
Hi,
Hoping someone here can help me with the following error:
Error occurred when executing SUPIR_Upscale:
No operator found for `memory_efficient_attention_forward` with inputs:
    query : shape=(1, 2970, 1, 512) (torch.bfloat16)
    key : shape=(1, 2970, 1, 512) (torch.bfloat16)
    value : shape=(1, 2970, 1, 512) (torch.bfloat16)
    attn_bias : <class 'NoneType'>
    p : 0.0
`decoderF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 128
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`flshattF@0.0.0` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 256
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 128
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.bfloat16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 512

File "C:\Users\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SUPIR\nodes.py", line 333, in process
    samples = self.model.batchify_sample(imgs, caps, num_steps=steps,
File "C:\Users\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "C:\Users\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\models\SUPIR_model.py", line 121, in batchify_sample
    _z = self.encode_first_stage_with_denoise(x, use_sample=False)
File "C:\Users\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "C:\Users\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\models\SUPIR_model.py", line 48, in encode_first_stage_with_denoise
    h = self.first_stage_model.denoise_encoder(x)
File "C:\Users\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "C:\Users\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "C:\Users\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\utils\tilevae.py", line 704, in __call__
    return self.vae_tile_forward(x)
File "C:\Users\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\utils\tilevae.py", line 587, in wrapper
    ret = fn(*args, **kwargs)
File "C:\Users\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "C:\Users\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\utils\tilevae.py", line 937, in vae_tile_forward
    tile = task[1](tile)
File "C:\Users\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\utils\tilevae.py", line 372, in <lambda>
    ('attn', lambda x, net=net: xformer_attn_forward(net, x)))
File "C:\Users\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SUPIR\SUPIR\utils\tilevae.py", line 340, in xformer_attn_forward
    out = xformers.ops.memory_efficient_attention(
File "C:\Users\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha\__init__.py", line 223, in memory_efficient_attention
    return _memory_efficient_attention(
File "C:\Users\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha\__init__.py", line 321, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
File "C:\Users\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha\__init__.py", line 337, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
File "C:\Users\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha\dispatch.py", line 120, in _dispatch_fw
    return _run_priority_list(
File "C:\Users\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\xformers\ops\fmha\dispatch.py", line 63, in _run_priority_list
    raise NotImplementedError(msg)
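For anyone reading the wall of "is not supported because" lines: every fused attention operator is being skipped for one of two reasons, either the per-head embedding dim (512, the last axis of the query shape) is over that kernel's limit, or the installed xFormers wheel was built without CUDA kernels at all, which is why even `cutlassF` is rejected. Below is a minimal sketch of that dispatch outcome; the operator names and limits are copied from the log above, but the check itself is a hypothetical simplification, not xformers' actual dispatch code.

```python
# Hypothetical simplification of the operator checks reported in the log above.
# Names and head-dim limits come from the error text, not from xformers' source.
OP_MAX_EMBED = {
    "decoderF": 128,
    "flshattF": 256,
    "tritonflashattF": 128,
    "smallkF": 32,
    "cutlassF": None,  # no head-dim limit listed in the log
}

def reject_reasons(embed_per_head: int, built_with_cuda: bool) -> dict:
    """Return, per operator, the list of reasons it would be skipped."""
    reasons = {}
    for op, max_dim in OP_MAX_EMBED.items():
        r = []
        if max_dim is not None and embed_per_head > max_dim:
            r.append(f"embed per head {embed_per_head} > {max_dim}")
        if not built_with_cuda:
            r.append("xFormers wasn't built with CUDA support")
        reasons[op] = r
    return reasons

# The failing query has shape (1, 2970, 1, 512): 512 channels per head,
# and the log says the wheel has no CUDA kernels -> every operator is rejected.
for op, why in reject_reasons(512, built_with_cuda=False).items():
    print(op, "->", "; ".join(why))
```

If this reading is right, the head-dim limits are moot here: with a CUDA-enabled xFormers build, `cutlassF` would have no listed objection to this input, so checking the build with `python -m xformers.info` is the first thing to try.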