picobyte / stable-diffusion-webui-wd14-tagger

Labeling extension for Automatic1111's Web UI

Error if you install CUDA 12.4? CUDA_PATH is set but CUDA wasnt able to be loaded. This plugin was working before I installed CUDA. #116

Open GUGU-YT opened 1 month ago

GUGU-YT commented 1 month ago

It didn't work. :( It returns the same error:

RuntimeError: D:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:891 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

I've tried to fix it for over 8 hours, but nothing worked. ONNX Runtime is 1.18.1, CUDA is 12.4, cuDNN is 9.3.0, torch is 2.4.0, torchaudio is 2.4.0, torchvision is 0.19.0, and the graphics driver is 555.85. Why does SD's WD14 tagger extension show "onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded"?

C:\Users\Administrator>echo %CUDA_PATH%
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4

>>> import torch
>>> print(torch.cuda.is_available())
True

CUDA_PATH C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4

CUDA_PATH_V12_4 C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4

>>> import onnxruntime as ort
>>> providers = ort.get_available_providers()
>>> print(providers)
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
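Note that get_available_providers() only reports which providers the installed wheel was built with; the CUDA and cuDNN DLLs are only loaded when a session is created. A minimal sketch to reproduce the check outside the WebUI, assuming "model.onnx" stands in for any local ONNX model (for example the downloaded WD14 tagger model):

# Creating the session is the step that actually loads the CUDA/cuDNN DLLs
# and raises the RuntimeError above if they cannot be found.
# "model.onnx" is a placeholder for any local ONNX model.
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
# If CUDA loaded, CUDAExecutionProvider is listed first; otherwise ORT falls
# back to CPU (or raises, depending on the onnxruntime version).
print(sess.get_providers())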

"I've checked everything thoroughly, following the official documentation, but I still can't use the reverse prompt suggestion plugin. Interestingly, this plugin was working before I installed CUDA. I'm wondering if the issue could be with the plugin itself?"

I also tried:
https://github.com/toshiaki1729/dataset-tag-editor-standalone
https://github.com/67372a/stable-diffusion-webui-wd14-tagger
They return the same error.

Maybe I can create a new environment using Conda that doesn't call CUDA? I'm not sure if this approach will work since this is my first time using Conda; before, I always ran everything directly on the local machine.
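For what it's worth, an environment that "doesn't call CUDA" boils down to onnxruntime only being asked for the CPU provider, which never touches the CUDA DLLs. A rough sketch of that idea (the extension chooses its providers internally, so this only illustrates the behaviour; "model.onnx" is again a placeholder):

# CPU-only session: the CUDA/cuDNN DLLs are never loaded, so the
# "CUDA_PATH is set but CUDA wasnt able to be loaded" error cannot occur.
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
print(sess.get_providers())  # ['CPUExecutionProvider']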

GUGU-YT commented 1 month ago

Downgrading CUDA to 11.8 fixes it, but I haven't tested yet whether the dependencies will load.

slashedstar commented 4 weeks ago

Same problem, I can't get onnxruntime-gpu to work with any version whatsoever.

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Fri_Jun_14_16:44:19_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.6, V12.6.20
Build cuda_12.6.r12.6/compiler.34431801_0

slashedstar commented 4 weeks ago

OK, finally fixed: install cuDNN and add C:\Program Files\NVIDIA\CUDNN\v9.3\bin\12.6 to the PATH variable. I also have C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6 in both the CUDA_PATH and PATH variables, so it should find all the .dll files it requires. Don't forget that you need to restart the terminal for the environment variables to update; maybe also restart the PC if unsure. I installed onnxruntime-gpu with:

pip install onnxruntime-gpu==1.18.1 --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/ --force-reinstall
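A quick sanity check, before launching the WebUI, that the directories on PATH actually expose the libraries onnxruntime-gpu needs is to try loading them directly. The DLL names below are the usual ones for CUDA 12.x plus cuDNN 9 on Windows (not an exhaustive list) and will differ for other versions:

# Try to load a few of the CUDA/cuDNN libraries the CUDA execution provider
# depends on. DLL names assume CUDA 12.x and cuDNN 9; adjust as needed.
import ctypes

for dll in ("cudart64_12.dll", "cublas64_12.dll", "cublasLt64_12.dll", "cudnn64_9.dll"):
    try:
        ctypes.WinDLL(dll)  # uses the normal Windows DLL search order, including PATH
        print("OK     ", dll)
    except OSError as err:
        print("MISSING", dll, "-", err)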

Cheesper commented 3 weeks ago

Tried many versions of CUDA + cuDNN. It worked with cuda_11.8.0_522.06 and cuDNN 8.9.7.29. I downloaded cuDNN from the archive, so I added the paths to the "/bin" and "/lib/x64" folders to PATH manually.
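When cuDNN is unpacked by hand like this, it is easy to edit the wrong PATH (user vs. system) or forget to reopen the terminal, so a small check that the current process can actually see those folders can help. The DLL names below assume CUDA 11.8 and cuDNN 8.x:

# Scan the PATH of the current process and report which entries contain the
# CUDA 11.8 runtime and cuDNN 8.x DLLs (adjust the names for other versions).
import os
from pathlib import Path

wanted = ("cudart64_110.dll", "cudnn64_8.dll")
for entry in os.environ.get("PATH", "").split(os.pathsep):
    hits = [name for name in wanted if (Path(entry) / name).is_file()]
    if hits:
        print(entry, "->", ", ".join(hits))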