comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

ZLUDA support #2810

Closed: LeagueRaINi closed this issue 7 months ago

LeagueRaINi commented 7 months ago

Any chance we will be seeing ZLUDA support for Comfy? Automatic runs fine for the most part, but it's not as nice as Comfy to work with.

So far, after forking the repo and applying the same steps as for Automatic (https://github.com/vladmandic/automatic/wiki/ZLUDA):

running it with --disable-cuda-malloc crashes the driver; running it with --disable-cuda-malloc --use-quad-cross-attention gets further but errors out when sampling.
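For reference, these were the two launch variants (both flags exist in ComfyUI's CLI args):

```
python main.py --disable-cuda-malloc
python main.py --disable-cuda-malloc --use-quad-cross-attention
```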

LeagueRaINi commented 7 months ago

Ok, so I got Comfy working. I haven't tested much besides the basic node layout, but here is what I did.

Here is the code I added to comfy/model_management.py; the torch.backends.cuda.enable_* calls are copied from what vlad does:

@@ -194,11 +194,10 @@ if args.fp16_vae:
 elif args.bf16_vae:
     VAE_DTYPE = torch.bfloat16
 elif args.fp32_vae:
     VAE_DTYPE = torch.float32

-
 if ENABLE_PYTORCH_ATTENTION:
     torch.backends.cuda.enable_math_sdp(True)
     torch.backends.cuda.enable_flash_sdp(True)
     torch.backends.cuda.enable_mem_efficient_sdp(True)

@@ -222,11 +221,10 @@ if args.force_fp16:

 if lowvram_available:
     if set_vram_to in (VRAMState.LOW_VRAM, VRAMState.NO_VRAM):
         vram_state = set_vram_to

-
 if cpu_state != CPUState.GPU:
     vram_state = VRAMState.DISABLED

 if cpu_state == CPUState.MPS:
     vram_state = VRAMState.SHARED
@@ -252,11 +250,28 @@ def get_torch_device_name(device):
         return "{} {}".format(device, torch.xpu.get_device_name(device))
     else:
         return "CUDA {}: {}".format(device, torch.cuda.get_device_name(device))

 try:
-    print("Device:", get_torch_device_name(get_torch_device()))
+    torch_device_name = get_torch_device_name(get_torch_device())
+
+    if "[ZLUDA]" in torch_device_name:
+        print("Detected ZLUDA, this is experimental and may not work properly.")
+
+        if torch.backends.cudnn.enabled:
+            torch.backends.cudnn.enabled = False
+            print("cuDNN is disabled because ZLUDA does currently not support it.")
+
+        torch.backends.cuda.enable_flash_sdp(True)
+        torch.backends.cuda.enable_math_sdp(False)
+        torch.backends.cuda.enable_mem_efficient_sdp(False)
+
+        if ENABLE_PYTORCH_ATTENTION:
+            print("Disabling pytorch cross attention because it's not supported by ZLUDA.")
+            ENABLE_PYTORCH_ATTENTION = False
+
+    print("Device:", torch_device_name)
 except:
     print("Could not pick default device.")

 print("VAE dtype:", VAE_DTYPE)

Note that it is still required to run Comfy with --disable-cuda-malloc. A simple check for non-NVIDIA cards here (see the diff below) is probably enough to not need the arg:

@@ -48,11 +48,13 @@ def cuda_malloc_supported():
     try:
         names = get_gpu_names()
     except:
         names = set()
     for x in names:
-        if "NVIDIA" in x:
+        if "AMD" in x:
+            return False
+        elif "NVIDIA" in x:
             for b in blacklist:
                 if b in x:
                     return False
     return True

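As a quick sanity check that ZLUDA is actually in use (a minimal sketch, not part of the patch above): the device name PyTorch reports should carry the "[ZLUDA]" tag that the model_management.py change keys on.

```python
import torch

# Under ZLUDA the CUDA device name typically carries a "[ZLUDA]" suffix,
# e.g. "AMD Radeon RX 7900 XTX [ZLUDA]" (example name, not verified here).
print(torch.cuda.get_device_name(torch.cuda.current_device()))
```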

Andyholm commented 7 months ago

Going to try this out on my 7900xtx, I'll report back after installing.

Andyholm commented 7 months ago

Think I did everything, but I'm getting this error when launching:


  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\main.py", line 76, in <module>
    import execution
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\execution.py", line 11, in <module>
    import nodes
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\nodes.py", line 20, in <module>
    import comfy.diffusers_load
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\diffusers_load.py", line 3, in <module>
    import comfy.sd
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\sd.py", line 4, in <module>
    from comfy import model_management
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\model_management.py", line 118, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\model_management.py", line 87, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\cuda\__init__.py", line 787, in current_device
    _lazy_init()
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\cuda\__init__.py", line 302, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
CraftMaster163 commented 7 months ago

@Andyholm did you find the files they were talking about? If so, where are they, or how do I get them?

Andyholm commented 7 months ago

@CraftMaster163 Yup! Not in the link provided, but here: https://github.com/lshqqytiger/ZLUDA/releases/tag/v3.2-win

CraftMaster163 commented 7 months ago

I see the files to replace were edited out; what were they? Also, I get this error when launching with ZLUDA: [image]

LeagueRaINi commented 7 months ago

> I see the files to replace were edited out; what were they? Also, I get this error when launching with ZLUDA: [image]

They were edited out because it's described on the linked wiki entry for Automatic anyway.

Andyholm commented 7 months ago

> Think I did everything, but I'm getting this error when launching: [traceback quoted above]

The reason for my error was that I didn't install the HIP SDK. I only did what OP said in the original post, but it has successfully launched now. :))

Andyholm commented 7 months ago

Got this now when trying to generate an image:



shape '[77, -1, 77, 77]' is invalid for input of size 5929

  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\nodes.py", line 56, in encode
    cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\sd.py", line 131, in encode_from_tokens
    cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\sd1_clip.py", line 514, in encode_token_weights
    out, pooled = getattr(self, self.clip).encode_token_weights(token_weight_pairs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\sd1_clip.py", line 39, in encode_token_weights
    out, pooled = self.encode(to_encode)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\sd1_clip.py", line 190, in encode
    return self(tokens)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\sd1_clip.py", line 172, in forward
    outputs = self.transformer(tokens, attention_mask, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\clip_model.py", line 131, in forward
    return self.text_model(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\clip_model.py", line 109, in forward
    x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\clip_model.py", line 68, in forward
    x = l(x, mask, optimized_attention)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\clip_model.py", line 49, in forward
    x += self.self_attn(self.layer_norm1(x), mask, optimized_attention)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\clip_model.py", line 20, in forward
    out = optimized_attention(q, k, v, self.heads, mask)
  File "C:\Users\andre\Desktop\ZLUDA COMFY\ComfyUI\comfy\ldm\modules\attention.py", line 117, in attention_basic
    mask = mask.reshape(mask.shape[0], -1, mask.shape[-2], mask.shape[-1]).expand(-1, heads, -1, -1).reshape(sim.shape)
LeagueRaINi commented 7 months ago

> Think I did everything, but I'm getting this error when launching: [traceback quoted above]
>
> The reason for my error was that I didn't install the HIP SDK. I only did what OP said in the original post, but it has successfully launched now. :))

vlad's ZLUDA wiki entry was linked before the changes as well; given how it works, I was expecting the bare minimum to already be set up, but that's why I kept only the link to vlad's setup now 😉

Andyholm commented 7 months ago

I thought you were just stating your sources, lol

CraftMaster163 commented 7 months ago

Anyone know why I get error 215 from the HIP SDK installer on a 7800xt, and how to fix it?

CraftMaster163 commented 7 months ago

Anyone know why I get this error? I did what the guide said and edited the file in the issue to do the same thing.

got prompt
model_type STABLE_CASCADE
adm 0
Requested to load SDXLRefinerClipModel
Loading 1 new model
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 56, in encode
    cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 131, in encode_from_tokens
    cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 514, in encode_token_weights
    out, pooled = getattr(self, self.clip).encode_token_weights(token_weight_pairs)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 39, in encode_token_weights
    out, pooled = self.encode(to_encode)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 190, in encode
    return self(tokens)
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd1_clip.py", line 172, in forward
    outputs = self.transformer(tokens, attention_mask, intermediate_output=self.layer_idx, final_layer_norm_intermediate=self.layer_norm_hidden_state)
  [torch module.py _wrapped_call_impl/_call_impl frames repeated here and between the frames below]
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 131, in forward
    return self.text_model(*args, **kwargs)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 97, in forward
    x = self.embeddings(input_tokens)
  File "E:\AI\ComfyUI_windows_portable\ComfyUI\comfy\clip_model.py", line 80, in forward
    return self.token_embedding(input_tokens) + self.position_embedding.weight
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\sparse.py", line 163, in forward
    return F.embedding(
  File "E:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\functional.py", line 2237, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Prompt executed in 3.26 seconds. I also added the flag too.

Andyholm commented 7 months ago

@CraftMaster163 Did you install HIP SDK?

CraftMaster163 commented 7 months ago

I did. I think the issue was that I had the wrong CUDA-compiled torch; it seems to be working now.
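(For anyone else hitting this: the linked ZLUDA wiki sets up a CUDA build of torch rather than a DirectML/ROCm one. Something along these lines, with exact versions per that wiki; the cu118 index is an assumption based on the ZLUDA builds available at the time:)

```
pip uninstall -y torch torchvision
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
```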

CraftMaster163 commented 7 months ago

Now, when it loads a model and tries to use it, it just crashes my system.

Andyholm commented 7 months ago

Loading went fine for me, just took a really long time the first time. But generating doesn't work, then I get the error I posted last.

CraftMaster163 commented 7 months ago

Before my system freezes, I see it say torch was not compiled with flash attention; could that be why it freezes?

LeagueRaINi commented 7 months ago

Could you guys keep that stuff out of this issue? Improper installation isn't something that needs to be discussed here; it just spams everyone's inbox for no reason. If you don't know how to apply this temporary fix, wait till Comfy gets official ZLUDA support.

Andyholm commented 7 months ago

My installation isn't improper. I followed this https://github.com/vladmandic/automatic/wiki/ZLUDA like you said, and made the edits you did, but I still get an error. Since the error is relevant to this issue, I've posted it here.

GUUser91 commented 7 months ago

@CraftMaster163 Here's how to fix the error 215 problem https://github.com/ROCm/ROCm/issues/2363#issuecomment-1805043942

CraftMaster163 commented 7 months ago

@Andyholm any luck fixing the sampling issue? I'm having a similar issue where cuDNN hits an internal error.

CraftMaster163 commented 7 months ago

@Andyholm if you're getting a cuDNN error, add torch.backends.cudnn.enabled = False to the sampling file, or somewhere torch is imported.
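For example (a minimal sketch; any place that runs after torch is imported and before sampling starts works):

```python
import torch

# ZLUDA currently has no cuDNN support, so force it off before any model runs.
torch.backends.cudnn.enabled = False
```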

brknsoul commented 7 months ago

@LeagueRaINi I imagine just cloning your fork is an easy way of getting started? ;-) Rather than having to make the manual adjustments to files, or waiting until the PR is merged.

Got it running, but oddly enough DirectML is faster than ZLUDA in ComfyUI... perhaps because it's not using dynamic BMM, but the sub-quadratic cross-attention method.

Andyholm commented 7 months ago

@CraftMaster163 Nope, but I tested ZLUDA on vladmandic/automatic and didn't notice any speed improvements over DirectML. I might be doing something wrong though, idk.

uuzp commented 7 months ago

> [quotes LeagueRaINi's model_management.py and cuda_malloc patches from above]

Very good, it works well on my 6900xt. You only need a simple

python main.py

and you can enjoy ComfyUI.

patientx commented 5 months ago

Can this be integrated into main already? I have to keep editing the files all the time otherwise.

LeagueRaINi commented 5 months ago

> Can this be integrated into main already? I have to keep editing the files all the time otherwise.

Kind of, but not really. I have extra patches in my fork that inject code into custom nodes to keep them from re-enabling cuDNN, so that needs fixing somehow. But I'm working on something else and don't have time right now, nor does there seem to be any interest in implementing this on the main branch.
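(For illustration only, a sketch of one way such a central guard could look, instead of patching each custom node; this is an assumption on my part, not the fork's actual patch:)

```python
import torch

# PyTorch already gives torch.backends.cudnn a custom module class;
# subclass it so its property descriptors stay intact.
_CudnnBase = type(torch.backends.cudnn)

class _CudnnLockedOff(_CudnnBase):
    def __setattr__(self, name, value):
        # Swallow any attempt by a custom node to re-enable cuDNN,
        # which ZLUDA does not support.
        if name == "enabled" and value:
            return
        super().__setattr__(name, value)

torch.backends.cudnn.enabled = False
torch.backends.cudnn.__class__ = _CudnnLockedOff
```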

patientx commented 5 months ago

I forked it myself (not a dev, don't know how to code, just a curious fellow). Only two files need to be changed now (as far as I know). I modified the requirements.txt so it gets installed correctly the first time. I also wrote detailed instructions; they are up to date as of today, and I am going to try to keep them that way. So feel free to try it.

https://github.com/patientx/ComfyUI-Zluda
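(Getting started presumably looks like the usual pattern; the batch file names below are taken from later comments in this thread, so see that repo's readme for the authoritative steps:)

```
git clone https://github.com/patientx/ComfyUI-Zluda
cd ComfyUI-Zluda
install.bat
start.bat
```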

VeteranXT commented 1 month ago

Currently broken after merge https://github.com/patientx/ComfyUI-Zluda

patientx commented 1 month ago

> Currently broken after merge https://github.com/patientx/ComfyUI-Zluda

In what way? Updating? The solution is on the [github page](https://github.com/patientx/ComfyUI-Zluda#-whats-new-). It is working after that update fix. Even so, I also tried installing from zero, and it works without a hitch.

VeteranXT commented 1 month ago

Updating works fine. It's just that when I press start, it auto-closes so fast I can't see the error. I did the following:

git clone https://github.com/patientx/ComfyUI-Zluda
git fetch --all
git reset --hard origin/master

then started install.bat, then start.bat.

Also, "patch zluda" does not work; ZLUDA is missing upon installation.

patientx commented 1 month ago

Delete everything and try from the start; I don't know what's happening. Others have successfully updated in the last few hours/days.

VeteranXT commented 1 month ago

I did just that; ./zluda is not created on install, so I copied mine over. I've fixed it manually.

patientx commented 1 month ago

I checked the batch files and the zluda address; everything is all right, dunno what happened. Check your bat files; maybe they don't have the zluda lines somehow...

VeteranXT commented 1 month ago

It's fixed. So no worries!

Kagamine-Rinrin commented 1 month ago

> Updating works fine. It's just that when I press start, it auto-closes so fast I can't see the error. [...]
>
> Also, "patch zluda" does not work; ZLUDA is missing upon installation.

Maybe you can add "pause" at the end of the bat file. That way you prevent it from closing, and you can then find the cause of the error.
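Something like this at the end of start.bat (assuming it ends right after the launch command):

```bat
rem ...existing launch command above...
rem keep the window open so any startup error stays readable
pause
```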

VeteranXT commented 1 month ago

It's working now!