comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

Compatibility Issue with PyTorch 2.1.2 and CUDA 11.4 #2509

Open gentlemarc opened 10 months ago

gentlemarc commented 10 months ago

I am encountering a compatibility issue when trying to run PyTorch 2.1.2 with CUDA 11.4 on my system. Despite the assertion in PyTorch GitHub discussions that CUDA 11.3 binaries are compatible with CUDA 11.4, I am unable to successfully run PyTorch with GPU support.

Environment:

- OS: Ubuntu 23.4 (Virtual Machine)
- PyTorch Version: 2.1.2
- Python Version: 3.11.4
- CUDA Version: 11.4
- GPU Model: NVIDIA Corporation GV100GL [Tesla V100 PCIe 16GB] (rev a1)

Steps to Reproduce:

  1. Installed PyTorch 2.1.2 with CUDA 12.1 support: pip install torch==2.1.2+cu121.
  2. Installed 'comfyui' as per the standard procedure.
  3. Ran main.py from the 'comfyui' package.
  4. Encountered a RuntimeError during the execution of main.py. The error log is as follows:

""" ComfyUI startup time: 2024-01-09 20:27:01.085492 Platform: Linux Python version: 3.11.4 (main, Dec 7 2023, 15:43:41) [GCC 12.3.0] Python executable: /home/ubuntu/venv/bin/python ** Log path: /home/ubuntu/ComfyUI/comfyui.log

Prestartup times for custom nodes:
   0.0 seconds: /home/ubuntu/ComfyUI/custom_nodes/ComfyUI-Manager

Traceback (most recent call last):
  File "/home/ubuntu/ComfyUI/main.py", line 76, in <module>
    import execution
  File "/home/ubuntu/ComfyUI/execution.py", line 13, in <module>
    import nodes
  File "/home/ubuntu/ComfyUI/nodes.py", line 20, in <module>
    import comfy.diffusers_load
  File "/home/ubuntu/ComfyUI/comfy/diffusers_load.py", line 4, in <module>
    import comfy.sd
  File "/home/ubuntu/ComfyUI/comfy/sd.py", line 5, in <module>
    from comfy import model_management
  File "/home/ubuntu/ComfyUI/comfy/model_management.py", line 118, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
                                  ^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/ComfyUI/comfy/model_management.py", line 87, in get_torch_device
    return torch.device(torch.cuda.current_device())
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/venv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 769, in current_device
    _lazy_init()
  File "/home/ubuntu/venv/lib/python3.11/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init
    torch._C._cuda_init()

  5. After installation, running import torch; torch.cuda.is_available() in Python returned False.
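A minimal diagnostic sketch along these lines (not part of the original report, and assuming it runs inside the same venv as ComfyUI) prints the CUDA version the installed wheel was built against and forces CUDA initialization so the underlying driver error is surfaced instead of a bare False:

```python
# Minimal diagnostic sketch: report which CUDA build the torch wheel targets
# and, if CUDA is unavailable, force initialization to see the real error.
import torch

print("torch version:      ", torch.__version__)          # e.g. 2.1.2+cu121
print("built against CUDA: ", torch.version.cuda)         # e.g. 12.1
print("cuda available:     ", torch.cuda.is_available())  # False in this report

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
else:
    try:
        # torch.cuda.init() raises the underlying RuntimeError that
        # torch.cuda.is_available() silently swallows.
        torch.cuda.init()
    except RuntimeError as exc:
        print("CUDA init failed:", exc)
```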
NeedsMoar commented 10 months ago

What does CUDA 11.3 have to do with trying to run PyTorch built against CUDA 12.1 on a system with 11.4 installed? Also, this is something to bring up over at the torch wiki, not a bug in ComfyUI. Even though the bugs section here is filled almost entirely with irrelevant questions, bugs in extensions, and other randomness, to the point that it might as well not exist, it still isn't the place to get a torch configuration problem fixed.
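For illustration only, a rough check like the following compares the CUDA version the wheel was built against with the highest CUDA version the installed driver reports; the nvidia-smi parsing and the +cu118 fallback are illustrative assumptions, not something stated in this thread:

```python
# Rough illustration of the mismatch (assumes nvidia-smi is on PATH).
# A +cu121 wheel ships the CUDA 12.1 runtime, but it still needs a driver
# new enough for CUDA 12.x; a CUDA 11.4-era driver usually is not, so
# initialization fails even though the wheel itself installed fine.
import subprocess
import torch

wheel_cuda = torch.version.cuda  # CUDA version the wheel targets, e.g. "12.1"

# The nvidia-smi header reports the highest CUDA version the driver supports.
smi_output = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
driver_cuda = None
for line in smi_output.splitlines():
    if "CUDA Version" in line:
        driver_cuda = line.split("CUDA Version:")[1].split("|")[0].strip()
        break

print(f"wheel built for CUDA {wheel_cuda}, driver supports up to CUDA {driver_cuda}")
if wheel_cuda and driver_cuda and float(driver_cuda) < float(wheel_cuda):
    print("Driver is older than the wheel's CUDA build: upgrade the NVIDIA "
          "driver or install a wheel built for an older CUDA, e.g. +cu118.")
```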