comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

ComfyUI_Portable "Onnxruntime not found or doesn't come with acceleration providers" #4999

Open ZenRevision opened 1 month ago

ZenRevision commented 1 month ago

Expected Behavior

  1. Follow install instructions for portable version on windows
  2. Install ControlNet custom nodes
  3. Be able to start ComfyUI_Portable without warnings that suggest performance will be substantially reduced

Actual Behavior

Receive this very common warning: UserWarning: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly

This has been happening for nearly a year now. I have had to deal with this issue every time I have set up ComfyUI_Portable, and many other users of ComfyUI_Portable (maybe everyone on Windows) have faced the same issue. I'm kind of stunned that this still hasn't been fixed.

Steps to Reproduce

  1. Follow install instructions for ComfyUI_Portable
  2. Install Control Net custom nodes for ComfyUI_Portable
  3. Start ComfyUI_Portable with GPU acceleration
  4. Receive the warning described above (line 40 in the provided log)

Apparently this is caused by version incompatibilities between torchvision/torchaudio and CUDA. More recently there is an additional incompatibility between onnxruntime and the versions of torchvision/torchaudio that actually work. So both torchvision/torchaudio and onnxruntime must be downgraded to specific earlier versions so that they play nicely with each other, and with CUDA.

That is all pretty much over my head, so I don't really understand any of it. I have included the working fix below, in simple terms, for the devs, as well as for anyone else who runs into this issue and doesn't understand the technicalities behind it.
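
For anyone who wants to see what their own install currently reports before changing anything, here is a minimal check run from the root of the portable folder (this assumes the embedded interpreter is at python_embeded\python.exe, as in the log below):

> python_embeded\python.exe -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
> python_embeded\python.exe -c "import onnxruntime as ort; print(ort.__version__, ort.get_available_providers())"

If the second command fails to import onnxruntime, or the provider list contains nothing beyond CPUExecutionProvider, the DWPose nodes fall back to OpenCV on the CPU and print the warning quoted above.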

Debug Logs

## ComfyUI-Manager: installing dependencies done.
[2024-09-20 16:57] ** ComfyUI startup time: 2024-09-20 16:57:45.433118
[2024-09-20 16:57] ** Platform: Windows
[2024-09-20 16:57] ** Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
[2024-09-20 16:57] ** Python executable: N:\Comfy2\ComfyUI_windows_portable\python_embeded\python.exe
[2024-09-20 16:57] ** ComfyUI Path: N:\Comfy2\ComfyUI_windows_portable\ComfyUI
[2024-09-20 16:57] ** Log path: N:\Comfy2\ComfyUI_windows_portable\comfyui.log
[2024-09-20 16:57] 
Prestartup times for custom nodes:
[2024-09-20 16:57]    1.0 seconds: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-09-20 16:57] 
Total VRAM 12288 MB, total RAM 65291 MB
[2024-09-20 16:57] pytorch version: 2.4.1+cu124
[2024-09-20 16:57] Set vram state to: NORMAL_VRAM
[2024-09-20 16:57] Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
[2024-09-20 16:57] Using pytorch cross attention
[2024-09-20 16:57] [Prompt Server] web root: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\web
[2024-09-20 16:57] Adding extra search path checkpoints M:\A1111\stable-diffusion-webui\models/Stable-diffusion
[2024-09-20 16:57] Adding extra search path configs M:\A1111\stable-diffusion-webui\models/Stable-diffusion
[2024-09-20 16:57] Adding extra search path vae M:\A1111\stable-diffusion-webui\models/VAE
[2024-09-20 16:57] Adding extra search path loras M:\A1111\stable-diffusion-webui\models/Lora
[2024-09-20 16:57] Adding extra search path loras M:\A1111\stable-diffusion-webui\models/LyCORIS
[2024-09-20 16:57] Adding extra search path upscale_models M:\A1111\stable-diffusion-webui\models/ESRGAN
[2024-09-20 16:57] Adding extra search path upscale_models M:\A1111\stable-diffusion-webui\models/RealESRGAN
[2024-09-20 16:57] Adding extra search path upscale_models M:\A1111\stable-diffusion-webui\models/SwinIR
[2024-09-20 16:57] Adding extra search path embeddings M:\A1111\stable-diffusion-webui\embeddings
[2024-09-20 16:57] Adding extra search path hypernetworks M:\A1111\stable-diffusion-webui\models/hypernetworks
[2024-09-20 16:57] Adding extra search path controlnet M:\A1111\stable-diffusion-webui\models/ControlNet
[2024-09-20 16:57] N:\Comfy2\ComfyUI_windows_portable\python_embeded\Lib\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
[2024-09-20 16:57] ### Loading: ComfyUI-Impact-Pack (V7.5.2)
[2024-09-20 16:57] ### Loading: ComfyUI-Impact-Pack (Subpack: V0.6)
[2024-09-20 16:57] [Impact Pack] Wildcards loading done.
[2024-09-20 16:57] ### Loading: ComfyUI-Manager (V2.51)
[2024-09-20 16:57] ### ComfyUI Revision: 2710 [38c69080] *DETACHED | Released on '2024-09-20'
[2024-09-20 16:57] here: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-tbox
[2024-09-20 16:57] Using ckpts path: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-tbox\..\..\models\annotator
[2024-09-20 16:57] Using symlinks: False
[2024-09-20 16:57] Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider']
[2024-09-20 16:57] N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-tbox\nodes\preprocessor\dwpose_node.py:28: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
[2024-09-20 16:57] ------------------------------------------
[2024-09-20 16:57] Comfyroll Studio v1.76 :  175 Nodes Loaded
[2024-09-20 16:57] ------------------------------------------
[2024-09-20 16:57] ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
[2024-09-20 16:57] ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
[2024-09-20 16:57] ------------------------------------------
[2024-09-20 16:57] [comfyui_controlnet_aux] | INFO -> Using ckpts path: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[2024-09-20 16:57] [comfyui_controlnet_aux] | INFO -> Using symlinks: False
[2024-09-20 16:57] [comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
[2024-09-20 16:57] 
Efficiency Nodes: Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...Success!
[2024-09-20 16:57] 
Import times for custom nodes:
[2024-09-20 16:57]    0.0 seconds: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
[2024-09-20 16:57]    0.0 seconds: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui
[2024-09-20 16:57]    0.0 seconds: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
[2024-09-20 16:57]    0.0 seconds: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux
[2024-09-20 16:57]    0.1 seconds: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-tbox
[2024-09-20 16:57]    0.3 seconds: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
[2024-09-20 16:57]    1.2 seconds: N:\Comfy2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
[2024-09-20 16:57] 
[2024-09-20 16:57] Starting server

[2024-09-20 16:57] To see the GUI go to: http://127.0.0.1:8188
[2024-09-20 16:57] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[2024-09-20 16:57] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[2024-09-20 16:57] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[2024-09-20 16:57] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[2024-09-20 16:57] [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json

Other

How to fix:

In the root of the ComfyUI_windows_portable folder, open a terminal and enter each line below, pressing Enter after each one and waiting for the command to finish executing before entering the next line.

> python_embeded\python.exe -m pip uninstall torch torchvision torchaudio
> python_embeded\python.exe -m pip install torch==2.1.1+cu118 torchvision==0.16.1+cu118 torchaudio==2.1.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html

It will take a while to install (and if you're in China, it will probably fail repeatedly, but if you keep trying it may eventually install the working versions of the torch components).
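
Once the install finishes, a quick way to confirm the pinned builds are actually in place (same embedded interpreter as above; the printed versions should end in +cu118 to match the wheels installed by the command above):

> python_embeded\python.exe -c "import torch, torchvision, torchaudio; print(torch.__version__, torchvision.__version__, torchaudio.__version__)"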

Then:

> python_embeded\python.exe -m pip uninstall onnxruntime-gpu
> python_embeded\python.exe -m pip install onnxruntime-gpu==1.16.2

After that, when you start ComfyUI_Portable with NVIDIA GPU acceleration on Windows, the warning "UserWarning: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly" should be replaced with this message: "DWPose: Onnxruntime with acceleration providers detected".
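
If you want to double-check without launching the full UI, the provider list can also be printed directly (a quick check with the same embedded interpreter; after the downgrade it should include CUDAExecutionProvider):

> python_embeded\python.exe -c "import onnxruntime as ort; print(ort.__version__); print(ort.get_available_providers())"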

The fact that this issue has been around for at least a year (and has actually gotten slightly worse; last time, I'm pretty sure I only needed the first part of the above fix) is kind of discouraging. This is a massive stumbling block for anyone who is not somewhat technically capable, and a major inconvenience even for people who are capable of tracking down the fix. I have tripped over this issue every single time.

I love ComfyUI, and have tremendous gratitude for everyone who has helped make it possible. I just think this really should be fixed, or that at the very least a script should be provided that can check for these critical version incompatibilities and resolve them in a more automated manner.

LukeG89 commented 1 month ago

For CUDA 12.x version, you need to use this:

python_embeded\python.exe -m pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/

I got it from their site: https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-12x

It worked for me, I have an RTX 3070 Ti

LukeG89 commented 1 month ago

And there is no need to downgrade PyTorch, you can go back to 2.4.1+cu124
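
A quick way to confirm this route as well (a sketch, assuming the same portable layout as above; get_device() should report GPU and the provider list should include CUDAExecutionProvider):

> python_embeded\python.exe -c "import torch, onnxruntime as ort; print(torch.__version__, ort.get_device(), ort.get_available_providers())"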

ZenRevision commented 1 month ago

> For CUDA 12.x version, you need to use this:
>
> python_embeded\python.exe -m pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
>
> I got it from their site: https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-12x
>
> It worked for me, I have an RTX 3070 Ti
>
> And there is no need to downgrade PyTorch, you can go back to 2.4.1+cu124

Thanks! Seems that's a better solution, then.

Unfortunately, I'm definitely not the only one who has found the fix I supplied; I copied it from some of the top search results. Because these fixes go back nearly a year and have been linked many times, they are highly ranked by search engines. While not the best solution, what I posted basically works too, so when people find that fix and it works, they will probably stick with it, even though there is a better fix.

As I said before, this is kind of over my head. Is this something ComfyUI can incorporate into the default portable setup? Because if so, doesn't it seem like that would be best?

ltdrdata commented 1 month ago

> As I said before, this is kind of over my head. Is this something ComfyUI can incorporate into the default portable setup? Because if so, doesn't it seem like that would be best?

ComfyUI does not depend on onnxruntime, so it cannot handle this issue. This issue needs to be resolved in the custom node that deals with this dependency.
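
For reference, the kind of guard such a custom node can perform at import time looks roughly like this (an illustrative sketch only, mirroring the fallback warning seen in the log above; the helper name is hypothetical and this is not actual ComfyUI or comfyui_controlnet_aux code):

import warnings

def pick_dwpose_backend():
    # Hypothetical helper: decide whether DWPose can run on onnxruntime with an
    # acceleration provider, or must fall back to OpenCV on the CPU.
    try:
        import onnxruntime as ort
    except ImportError:
        warnings.warn("DWPose: Onnxruntime not found, switching to OpenCV with CPU device")
        return "opencv-cpu"

    accelerated = {"CUDAExecutionProvider", "ROCMExecutionProvider",
                   "DmlExecutionProvider", "CoreMLExecutionProvider"}
    if accelerated & set(ort.get_available_providers()):
        return "onnxruntime"

    warnings.warn("DWPose: Onnxruntime doesn't come with acceleration providers, switching to OpenCV with CPU device")
    return "opencv-cpu"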