florestefano1975 / ComfyUI-HiDiffusion

GNU General Public License v3.0

M1 Mac not using Metal instead of CUDA #10

Closed bildmeister closed 7 months ago

bildmeister commented 7 months ago

Hi Stefano, I tried to run HiDiffusion on an M1 Mac and encountered the error listed below. I asked ChatGPT to make sure that Torch is using Metal instead of CUDA, which I obviously don't have. Something in the code seems to force CUDA and sends my M1 into an error.

Error occurred when executing HiDiffusionSDXL:

Torch not compiled with CUDA enabled

File "/Users/studiomaster/AI/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "/Users/studiomaster/AI/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/Users/studiomaster/AI/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/Users/studiomaster/AI/ComfyUI/custom_nodes/ComfyUI-HiDiffusion/__init__.py", line 159, in hi_diff_sdxl
    pipe = StableDiffusionXLPipeline.from_single_file(ckpt_path, torch_dtype=torch.float16).to("cuda")
File "/Users/studiomaster/AI/venv/lib/python3.12/site-packages/diffusers/pipelines/pipeline_utils.py", line 418, in to
    module.to(device, dtype)
File "/Users/studiomaster/AI/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1173, in to
    return self._apply(convert)
File "/Users/studiomaster/AI/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 779, in _apply
    module._apply(fn)
File "/Users/studiomaster/AI/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 804, in _apply
    param_applied = fn(param)
File "/Users/studiomaster/AI/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1159, in convert
    return t.to(
File "/Users/studiomaster/AI/venv/lib/python3.12/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")

ChatGPT: It seems you're encountering the same error related to CUDA not being enabled in your Torch installation. This error occurs when the software attempts to use CUDA for GPU acceleration, but CUDA support is not available.

Since you're using an M1 Mac, which doesn't support CUDA, the software needs to be configured to use Metal for GPU acceleration instead. However, the software is requesting CUDA explicitly, leading to this error.

To resolve this issue, configure the software to use Metal instead of CUDA for GPU acceleration. On macOS this typically means changing a setting or configuration within the software so it targets Metal.

If the software has documentation or settings covering GPU acceleration and M1 Mac compatibility, consult that documentation for guidance on configuring it for your system.

If you're unable to resolve the issue through the software's settings or documentation, reach out to the developers or community support channels; they may be able to provide more specific guidance or updates addressing compatibility with M1 Macs.
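The traceback shows the immediate cause: the node calls `.to("cuda")` unconditionally when building the pipeline. A minimal sketch of a device-aware alternative is below; the `pick_device` helper is hypothetical (not part of the repo), and whether the rest of the node works on Apple's MPS backend is untested:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Pick the best available torch device name, falling back to CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"  # Apple Silicon's Metal Performance Shaders backend
    return "cpu"

# In the node, the hardcoded .to("cuda") could then become something like:
#
#   import torch
#   device = pick_device(torch.cuda.is_available(),
#                        torch.backends.mps.is_available())
#   pipe = StableDiffusionXLPipeline.from_single_file(
#       ckpt_path, torch_dtype=torch.float16).to(device)
```

Note that even with the device fixed, other parts of the code may still assume CUDA-only features, so this alone may not make the node run on M1.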

florestefano1975 commented 7 months ago

CUDA is explicitly required, so an NVIDIA graphics card is needed.