Closed Sostay closed 9 months ago
It's just a replacement for "DWPose doesn't support CUDA out of the box".
```
E:\Stable Diffusion\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
```
Looks CUDA related. Just ran into this myself. The ONNX site only lists support for CUDA 11.8. New ComfyUI is using cu121 (I think, since that's what it's downloading now).
Not sure this will get fixed until ONNX does something on their side.
After installing onnxruntime-gpu 1.16.1:
```
DWPose: Onnxruntime with acceleration providers detected. Caching sessions (might take around half a minute)...
EP Error D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "E:\Stable_Diffusion\ComfyUI\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll" when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
```
@Sostay "Falling back" is not an error
After installing onnxruntime (either the GPU or the CPU version), there is an error message. So it seems there's no need to install onnxruntime, and the fallback can just be ignored?
> just ignore the fallback?
As I said, a "fallback" is not an error, but if the log says "Failed to create CUDAExecutionProvider" or "Failed to create ROCMExecutionProvider", then it is.
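To tell the informational fallback apart from a real failure programmatically, here is a minimal sketch; the helper name and the marker strings are taken only from the messages quoted in this thread, not from any ONNX Runtime API:

```python
# Hypothetical helper: classify ONNX Runtime startup log lines.
# "Falling back to [...] and retrying" is informational; a line containing
# "Failed to create CUDAExecutionProvider" (or the ROCm equivalent) means
# GPU acceleration is actually broken.
def is_real_provider_error(log_line: str) -> bool:
    fatal_markers = (
        "Failed to create CUDAExecutionProvider",
        "Failed to create ROCMExecutionProvider",
    )
    return any(marker in log_line for marker in fatal_markers)
```

If a line only mentions falling back to another provider, DWPose will still run (possibly on a slower provider); only the "Failed to create ..." lines mean the GPU provider itself could not be loaded.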
I installed it like this, and I get this error when I open Comfy.
I have the same problem, DWPose keeps using the CPU. I'm on Comfy portable cu121.
same
+1
I've installed TensorRT and downgraded `torch` to use cu118, and also reinstalled `onnxruntime-gpu`. InvokeAI still uses cu118, and Comfy also works normally with it. No errors nor fallbacks.
I did this because there's no cu121 listed here, nor any of the 12.x versions: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
Same here... but: Windows 11
```
C:\Users\booty>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
```
onnx 1.14.1, onnxruntime-gpu 1.16.1
ComfyUI Revision: 1587 [f8caa24b] | Released on '2023-10-17'
... which I thought was supposed to be compatible with ONNXRuntime.
@illuculent it is compatible with ORT: https://onnxruntime.ai/docs/get-started/with-windows.html#windows-os-integration
> +1 I've installed TensorRT and downgraded `torch` to use cu118 and also reinstalled `onnxruntime-gpu`. InvokeAI still uses cu118, and Comfy also works normally with it. No errors nor fallbacks.
> I did this because there's no cu121 listed here nor any of the 12.x versions: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
any tips on how to do this safely with the portable version?
> +1 I've installed TensorRT and downgraded `torch` to use cu118 and also reinstalled `onnxruntime-gpu`. InvokeAI still uses cu118, and Comfy also works normally with it. No errors nor fallbacks. I did this because there's no cu121 listed here nor any of the 12.x versions: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
>
> any tips on how to do this safely with the portable version?
Portable version of? Safe from what? Please elaborate.
What steps did you use?
> +1 I've installed TensorRT and downgraded torch to use cu118 and also reinstalled onnxruntime-gpu. InvokeAI still uses cu118, and Comfy also works normally with it. No errors nor fallbacks. I did this because there's no cu121 listed here nor any of the 12.x versions: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
>
> any tips on how to do this safely with the portable version?
I have the portable version, what do you mean by "safely"?
TensorRT: https://developer.nvidia.com/tensorrt
PyTorch: https://pytorch.org/get-started/locally
ComfyUI: https://github.com/comfyanonymous/ComfyUI#manual-install-windows-linux
ORT-GPU: https://onnxruntime.ai/docs/install/#python-installs
@haqthat I didn't use any steps
Make sure you have everything in the system PATH variable. Or if you don't want it in the system PATH, create a script to have PATH changed only in it:
Windows (batch)
File: `run_comfy.bat`
```bat
@ECHO off
SETLOCAL
SET "PATH=X:\path\to\missing\files;%PATH%"
CD %~dp0ComfyUI
python main.py
ENDLOCAL
EXIT /B 0
```
Linux (bash)
File: `run_comfy.sh` - if you can't run it, add execute permissions with `chmod +x run_comfy.sh`
```bash
#!/usr/bin/env bash
# "return" is only valid inside a function; a script should use "exit"
cd "$(dirname "$0")/ComfyUI"
PATH="/path/to/missing/files:$PATH" python main.py
exit 0
```
Place this script in the same folder where your ComfyUI folder is.
Do this ^ if it says that some program you installed is missing or not found.
I don't know your exact issue, so my answers are about what I think it is. Take some time to look through the terminal when running Comfy; it'll tell you everything that's wrong, and go from there.
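Before launching Comfy, you can sanity-check that the directories you added are actually visible on PATH. A rough sketch; the function name and structure are mine, not from the thread:

```python
import os

def missing_from_path(required_dirs, path=None):
    """Return the directories from required_dirs that are not on PATH.

    Uses the current process PATH unless an explicit path string is given.
    Comparison is case-normalized so it behaves sensibly on Windows.
    """
    if path is None:
        path = os.environ.get("PATH", "")
    entries = {os.path.normcase(p.rstrip("\\/")) for p in path.split(os.pathsep) if p}
    return [d for d in required_dirs
            if os.path.normcase(d.rstrip("\\/")) not in entries]
```

If this returns a non-empty list for, say, your CUDA `bin` directory, DLLs in it won't be found at load time, which is exactly the "LoadLibrary failed with error 126" symptom in this thread.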
@Fannovel16 like you explained in another post, I added `onnxruntime-gpu`, `onnxruntime-directml`, and `onnxruntime-openvino` to comfyui_controlnet_aux/requirements.txt.
:) Now I have acceleration on both: CPUs and GPU run at 100%, and the fans too... But there's still an error at startup with onnxruntime_providers_openvino.dll. I'm not a developer and I don't know how to fix it.
```
DWPose: Onnxruntime with acceleration providers detected. Caching sessions (might take around half a minute)...
EP Error C:\Users\Administrator\Desktop\validation_1.16\onnxruntime\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_openvino.dll" when using ['OpenVINOExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CPUExecutionProvider'] and retrying.
EP Error C:\Users\Administrator\Desktop\validation_1.16\onnxruntime\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_openvino.dll" when using ['OpenVINOExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CPUExecutionProvider'] and retrying.
```
Not a double copy/paste; the same error is shown twice like this.
Full startup:
```
D:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
** ComfyUI start up time: 2023-11-02 09:11:24.926201

Prestartup times for custom nodes:
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 12287 MB, total RAM 49135 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
Registered sys.path: ['D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\__init__.py', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_pycocotools', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_oneformer', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_midas_repo', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src', 'D:\ComfyUI_windows_portable\ComfyUI\comfy', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\git\ext\gitdb', 'D:\ComfyUI_windows_portable\ComfyUI', 'D:\ComfyUI_windows_portable\python_embeded\python310.zip', 'D:\ComfyUI_windows_portable\python_embeded', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\win32', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\win32\lib', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\Pythonwin', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\impact_subpack', '../..']
DWPose: Onnxruntime with acceleration providers detected. Caching sessions (might take around half a minute)...
EP Error C:\Users\Administrator\Desktop\validation_1.16\onnxruntime\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_openvino.dll" when using ['OpenVINOExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CPUExecutionProvider'] and retrying.
EP Error C:\Users\Administrator\Desktop\validation_1.16\onnxruntime\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\onnxruntime\capi\onnxruntime_providers_openvino.dll" when using ['OpenVINOExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CPUExecutionProvider'] and retrying.
DWPose: Sessions cached
FizzleDorf Custom Nodes: Loaded
[tinyterraNodes] Loaded

Import times for custom nodes:
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\IPAdapter-ComfyUI
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-animatediff
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
0.1 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_tinyterraNodes
0.5 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_FizzNodes
0.7 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
1.2 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
3.6 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
```
> +1 I've installed TensorRT and downgraded `torch` to use cu118 and also reinstalled `onnxruntime-gpu`. InvokeAI still uses cu118, and Comfy also works normally with it. No errors nor fallbacks.
> I did this because there's no cu121 listed here nor any of the 12.x versions: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
How do I downgrade CUDA from 12.1 to 11.8?
> +1 I've installed TensorRT and downgraded `torch` to use cu118 and also reinstalled `onnxruntime-gpu`. InvokeAI still uses cu118, and Comfy also works normally with it. No errors nor fallbacks. I did this because there's no cu121 listed here nor any of the 12.x versions: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
>
> How to downgrade the Cuda from 12.1 to 11.8?
Activate the virtual environment, uninstall torch, then install torch+cu118 with the command from https://pytorch.org/
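Spelled out, those steps look roughly like this (version pins omitted; the cu118 index URL below is the one the PyTorch site's install selector generates, so check https://pytorch.org/ for the current command):

```shell
# inside the activated virtual environment
pip uninstall -y torch torchvision torchaudio
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```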
Hello, this is not an error; it's just that TensorRT does not natively support these models. Maybe you can find the answer in issue #82.
Does it support acceleration on Apple silicon?
I got this message when I start ComfyUI:
```
/comfyui_controlnet_aux/node_wrappers/dwpose.py:24: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
```
Then I installed onnxruntime-silicon, which is the onnxruntime build for Apple silicon: https://github.com/cansik/onnxruntime-silicon
but onnxruntime still cannot be found.
```
Comfyroll Custom Nodes: Loaded
[comfyui_controlnet_aux] | INFO -> Using ckpts path: /home/sky/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts
/home/sky/ComfyUI/custom_nodes/comfyui_controlnet_aux/node_wrappers/dwpose.py:24: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
/home/sky/ComfyUI/custom_nodes/failfast-comfyui-extensions/extensions
/home/sky/ComfyUI/web/extensions/failfast-comfyui-extensions
WAS Node Suite: BlenderNeko's Advanced CLIP Text Encode found, attempting to enable CLIPTextEncode support.
WAS Node Suite: CLIPTextEncode (BlenderNeko Advanced + NSP) node enabled under WAS Suite/Conditioning menu.
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite: ffmpeg_bin_path is set to: /usr/bin/ffmpeg
WAS Node Suite: Finished. Loaded 198 nodes successfully.
```
I resolved this by installing PyTorch built for CUDA 11.8 side-by-side with my current CUDA (v12.3):
- Reinstalling PyTorch for CUDA 11.8 within my virtual environment:
```shell
pip uninstall torch torchvision torchaudio
pip install torch==2.1.1+cu118 torchvision==0.16.1+cu118 torchaudio==2.1.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html
```
- Installing `onnxruntime-gpu`:
```shell
pip install onnxruntime-gpu
```
Now I see `DWPose: Onnxruntime with acceleration providers detected` 🎉
To use CUDA 12.* instead of 11.8, you can try installing a nightly binary like the following (for Python 3.8~3.11):
```shell
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
pip install onnxruntime-gpu
```
saved my life, thank you!
For latecomers: here's the way to enable GPU acceleration on CUDA 12.x.
Track the issue here for version changes: https://github.com/microsoft/onnxruntime/issues/13932
Runtime nightly: https://dev.azure.com/onnxruntime/onnxruntime/_artifacts/feed/onnxruntime-cuda-12
ORT nightly: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly
```shell
# with cu12.*
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/
```
> I resolved this by installing PyTorch for CUDA 11.8 side-by-side with my current CUDA (v12.3) and:
>
> - Reinstalling PyTorch for CUDA 11.8 within my virtual environment:
>   `pip uninstall torch torchvision torchaudio`
>   `pip install torch==2.1.1+cu118 torchvision==0.16.1+cu118 torchaudio==2.1.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html`
> - Installing `onnxruntime-gpu`:
>   `pip install onnxruntime-gpu`
>
> Now I see `DWPose: Onnxruntime with acceleration providers detected` 🎉
My device is an RTX 3080 Ti, which matches CUDA 11.7, but I found that the onnx package only has CUDA 11.8 or 11.6 versions. I followed the steps and it doesn't work. What should I do?
@izonewonyoung, `pip install onnxruntime-gpu` should work with CUDA 11.6~11.8 on Windows and Linux. Please make sure you also install the other dependencies, like the latest cuDNN for CUDA 11; on Windows you also need the latest VC runtime DLLs.
```
EP Error A:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\ComfyUI\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll" when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
```
I am having the above problem when using rembg with ComfyUI and it is running very slow. Has this been resolved now?
@Zakutu, if you intend to use TensorRT EP, please install TensorRT 8.6.1 for CUDA 11 (since official onnxruntime-gpu is for CUDA 11 right now).
Please refer to https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/models/stable_diffusion/. It is a demo of using TRT EP (or CUDA EP) with stable diffusion.
For anyone reading this thread looking for a solution for Apple Silicon, try cansik/onnxruntime-silicon.
Install:
```shell
pip install onnxruntime-silicon
```
On start up:
```
[comfyui_controlnet_aux] | INFO -> Using ckpts path: /Users/griff/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
DWPose: Onnxruntime with acceleration providers detected
```
Running the DWPose Estimator on a 512x768 image (M1 Max/Sonoma 14.1.2):
```
DWPose: Using yolox_l.onnx for bbox detection and dw-ll_ucoco_384_bs5.torchscript.pt for pose estimation
DWPose: Caching ONNXRuntime session yolox_l.onnx...
DWPose: Caching TorchScript module dw-ll_ucoco_384_bs5.torchscript.pt on ...
DWPose: Bbox 436.91ms
DWPose: Pose 383.29ms on 1 people
```
> EP Error A:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\ComfyUI\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll" when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
>
> I am having the above problem when using rembg with ComfyUI and it is running very slow. Has this been resolved now?
→ Solved it. The cause was a Python version conflict within the same environment.
Existing Python → 3.10.6. Python 3.11 + CUDA 12.x → error. Python 3.10.6 + CUDA 11.8 + latest cuDNN → OK.
After rebuilding ComfyUI portable against cu118, it ran without any errors or warnings.
▶ Download the old version from the release assets.
It seems CUDA 12 packages came out just three days ago (as of this writing).
All I had to do to make it work was to install the CUDA 12 version of the ONNX runtime.
Hope this helps! 🙏
I'm running:
Windows 10 Pro: 10.0.19045
Python: 3.11.6
Pip: 23.3.2
GPU: NVIDIA GeForce GTX 980 Ti (🙈)
If I activate my venv and run `python -c "import torch; print(torch.__version__); print(torch.version.cuda)"`, I get:
```
2.1.2+cu121
12.1
```
If somebody has any problem with onnxruntime 1.17.0 and onnxruntime-gpu 1.17.0, you can try installing them separately (the non-GPU version first, then the GPU version): https://github.com/Fannovel16/comfyui_controlnet_aux/issues/242#issuecomment-1929110274
> It seems CUDA 12 packages came out just three days ago (as of this writing). All I had to do to make it work was to install the CUDA 12 version of the ONNX runtime. Hope this helps! 🙏
>
> Some background
> I'm running: Windows 10 Pro: 10.0.19045, Python: 3.11.6, Pip: 23.3.2, GPU: NVIDIA GeForce GTX 980 Ti (🙈)
> If I activate my venv and run `python -c "import torch; print(torch.__version__); print(torch.version.cuda)"`, I get: `2.1.2+cu121`, `12.1`
Thank you very much, this solved the warning!
Any way to get this to run on the Windows ComfyUI portable? I'm running 12.3 as well...
> pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/

Run it with the embedded Python's pip. For example, mine is CUDA 12.3:
```shell
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/
```
Check the screenshot below.
See https://onnxruntime.ai/docs/install/
You can install it like the following:
```shell
pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```
> for late comer: here's the way to enable gpu accelerate on cuda12.x. track the issue here for version changes: microsoft/onnxruntime#13932; runtime nightly: https://dev.azure.com/onnxruntime/onnxruntime/_artifacts/feed/onnxruntime-cuda-12; ort nightly: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly
>
> ```shell
> # with cu12.*
> pip install coloredlogs flatbuffers numpy packaging protobuf sympy
> pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
> pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/
> ```
THX!!
> I resolved this by installing PyTorch for CUDA 11.8 side-by-side with my current CUDA (v12.3) and:
>
> - Reinstalling PyTorch for CUDA 11.8 within my virtual environment:
>   `pip uninstall torch torchvision torchaudio`
>   `pip install torch==2.1.1+cu118 torchvision==0.16.1+cu118 torchaudio==2.1.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html`
> - Installing `onnxruntime-gpu`:
>   `pip install onnxruntime-gpu`
>
> Now I see `DWPose: Onnxruntime with acceleration providers detected` 🎉
> More detailed walk-through on civitai.com
It doesn't solve the problem
Another scenario: you have installed both the onnxruntime and onnxruntime-gpu packages. onnxruntime is used by default, so just uninstall onnxruntime and keep the GPU version. I hope it helps!
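A quick way to check for that conflict, as a sketch using only the standard library (the helper name is mine, not an onnxruntime API):

```python
from importlib import metadata

def installed_ort_packages():
    """Return which onnxruntime distributions are installed.

    If the result contains both 'onnxruntime' and 'onnxruntime-gpu', the CPU
    build may shadow the GPU build, and uninstalling plain 'onnxruntime'
    is the fix suggested above.
    """
    found = []
    for name in ("onnxruntime", "onnxruntime-gpu"):
        try:
            metadata.version(name)
            found.append(name)
        except metadata.PackageNotFoundError:
            pass  # distribution not installed
    return found
```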
[Fix-Tip] Long story short: it works with CUDA 12! My problem was these 3 folders: onnxruntime, onnxruntime_gpu-1.18.0.dist-info, onnxruntime-1.18.0.dist-info (location: venv\Lib\site-packages)
[Fix] I just deleted all 3 of them and re-downloaded them while in venv mode using:
- pip install onnxruntime-gpu
- pip install onnxruntime
[Why] This bug happened when I installed other custom_nodes (in my case easy-comfy-nodes) that overwrote some of comfyui_controlnet_aux's requirements (making the "DWPose might run very slowly" warning reappear).
If you don't have a venv (Python virtual environment) installed, close and exit ComfyUI, then in the main ComfyUI folder open cmd (make sure you're in the main ComfyUI directory) and type:
- python -m venv venv
- call ./venv/scripts/activate
- pip install onnxruntime
- pip install onnxruntime-gpu
- good luck
> [Fix-Tip] long story short - it works with Cuda12! My problem was these 3 folders: onnxruntime, onnxruntime_gpu-1.18.0.dist-info, onnxruntime-1.18.0.dist-info (location: venv\Lib\site-packages)
> [Fix] I just deleted all 3 of them and re-downloaded them while in venv mode using:
>
> - pip install onnxruntime-gpu
> - pip install onnxruntime
>
> [Why] This bug happened when I installed other custom_nodes (in my case easy-comfy-nodes) that overwrote some of comfyui_controlnet_aux's requirements (making the "DWPose might run very slowly" warning reappear).
> If you don't have a venv (Python virtual environment) installed, close and exit ComfyUI, then in the main ComfyUI folder open cmd (make sure you're in the main ComfyUI directory) and type:
>
> - python -m venv venv
> - call ./venv/scripts/activate
> - pip install onnxruntime
> - pip install onnxruntime-gpu
> - good luck
Doesn't work for me.
> for late comer: here's the way to enable gpu accelerate on cuda12.x. track the issue here for version changes: microsoft/onnxruntime#13932; runtime nightly: https://dev.azure.com/onnxruntime/onnxruntime/_artifacts/feed/onnxruntime-cuda-12; ort nightly: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly
>
> ```shell
> # with cu12.*
> pip install coloredlogs flatbuffers numpy packaging protobuf sympy
> pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
> pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/
> ```
THX!!
Works for me, thx
The following fixed the error for me on W10, using the Windows portable version for NVIDIA GPUs via PowerShell:
1) `cd` to the project root
2) run `.\python_embeded\python.exe -s -m pip install onnxruntime-gpu`
You have to make sure the embedded Python distro (3.10) installs the dependency, hence the invocation using the embedded python.exe. It may not find the dep if installed using your command-line environment.
> for late comer: here's the way to enable gpu accelerate on cuda12.x. track the issue here for version changes: microsoft/onnxruntime#13932; runtime nightly: https://dev.azure.com/onnxruntime/onnxruntime/_artifacts/feed/onnxruntime-cuda-12; ort nightly: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly
>
> ```shell
> # with cu12.*
> pip install coloredlogs flatbuffers numpy packaging protobuf sympy
> pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
> pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/
> ```
Thx!!! This is the way.
> for late comer: here's the way to enable gpu accelerate on cuda12.x. track the issue here for version changes: microsoft/onnxruntime#13932; runtime nightly: https://dev.azure.com/onnxruntime/onnxruntime/_artifacts/feed/onnxruntime-cuda-12; ort nightly: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ort-cuda-12-nightly
>
> ```shell
> # with cu12.*
> pip install coloredlogs flatbuffers numpy packaging protobuf sympy
> pip install ort-nightly-gpu --index-url=https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
> pip install onnxruntime-gpu==1.17.0 --index-url=https://pkgs.dev.azure.com/onnxruntime/onnxruntime/_packaging/onnxruntime-cuda-12/pypi/simple/
> ```
OMFG! Thank you very much! Worked for me. 😁
What is the problem? It seems that OpenCV is not running in the normal way. Does anyone know how to solve it?
```
E:\Stable Diffusion\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device (super slow)
  warnings.warn("Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device (super slow)")
```