Closed: Selur closed this 2 years ago.
Can you shed some light on which runtime (onnxruntime, onnxruntime-gpu, or onnxruntime-directml) is needed for which provider?
CUDA and TensorRT require onnxruntime-gpu, while DirectML requires onnxruntime-directml.
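That mapping can be sketched as a small lookup. This is a hypothetical helper, not part of vs-dpir or ONNX Runtime; the provider strings are ONNX Runtime's own names, and the package assignment follows the statement above:

```python
# Hypothetical lookup (not part of vs-dpir): which pip package ships
# which ONNX Runtime execution provider, per the answer above.
PROVIDER_PACKAGE = {
    "CPUExecutionProvider": "onnxruntime",          # also bundled with the other two
    "CUDAExecutionProvider": "onnxruntime-gpu",
    "TensorrtExecutionProvider": "onnxruntime-gpu",
    "DmlExecutionProvider": "onnxruntime-directml",
}

def required_package(provider: str) -> str:
    """Return the pip package that provides the given execution provider."""
    return PROVIDER_PACKAGE[provider]
```

For example, `required_package("DmlExecutionProvider")` tells you to install onnxruntime-directml.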
2022-03-19 20:14:30.0977965 [E:onnxruntime:Default, provider_bridge_ort.cc:995 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "Das angegebene Modul wurde nicht gefunden." [The specified module could not be found.] when trying to load "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2022-03-19 20:14:30.0978594 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:535 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
Vapoursynth preview error:
2022-03-19 20:14:30.3973685 [E:onnxruntime:Default, provider_bridge_ort.cc:995 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "Das angegebene Modul wurde nicht gefunden." when trying to load "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2022-03-19 20:14:30.3974014 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:535 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
`LoadLibrary failed with error 126` means some dependent DLLs are missing, such as the Visual C++ 2019 runtime, the CUDA SDK, or cuDNN. Visit https://github.com/HolyWu/vs-dpir/discussions/19 for installation steps.
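In other words, error 126 is a loader failure: onnxruntime_providers_cuda.dll itself is present, but one of the libraries it links against cannot be found. A rough cross-platform probe for that situation, using only the standard library (the library names passed in are examples, not a definitive list):

```python
import ctypes.util

def missing_libraries(names):
    """Return the subset of library names the system loader cannot locate.

    ctypes.util.find_library searches the platform's usual locations
    (PATH on Windows); a None result corresponds to LoadLibrary
    failing with error 126 for that dependency.
    """
    return [n for n in names if ctypes.util.find_library(n) is None]
```

On the affected machine, something like `missing_libraries(["cudnn64_8", "cublas64_11"])` would list which DLLs still need to be made findable.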
When I uninstall 'onnxruntime-gpu', install 'onnxruntime' and use provider=1, no error appears. When I uninstall 'onnxruntime', install 'onnxruntime-directml' and use provider=1, no error appears.
Actually there is a warning message generated, at least when using vspipe, something like: Warning: C:\Python39\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:55: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names. Available providers: 'CPUExecutionProvider'. If you didn't see it, it means the application you are using doesn't capture the messages.
-> Is there a downside to using onnxruntime-directml?
It's slower compared to CUDA and/or TensorRT. DirectML is mainly for AMD and Intel hardware, since they don't support CUDA.
Also, using provider=0 and provider=1 doesn't seem to make a difference. From the looks of it, CPU and GPU usage is the same both times.
That's because ONNX Runtime automatically falls back to the CPU for inference when the CUDA Execution Provider (provider=1) is not available.
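That fallback is silent apart from the UserWarning, so it can help to make it explicit. A minimal sketch of the fallback rule; in real code the `available` list would come from onnxruntime's `get_available_providers()`, stubbed here as a plain list:

```python
def effective_provider(requested, available):
    """Mirror ONNX Runtime's fallback: use the requested provider when
    present in the available list, otherwise fall back to the CPU
    provider silently (which is what the thread observed)."""
    return requested if requested in available else "CPUExecutionProvider"
```

This is why provider=0 and provider=1 behaved identically above: with only the CPU provider available, both end up on the CPU.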
So is:
> Visit https://github.com/HolyWu/vs-dpir/discussions/19 for installation steps.
Okay. I'm using a portable Vapoursynth and I'd like to keep it portable. Do you know a way to install the CUDA SDK in a portable way, without having to install the SDK in the system? At the moment my environment has
python -m pip list
Package Version
-------------------- ------------
addict 2.4.0
bpyutils 0.2.0
certifi 2021.10.8
charset-normalizer 2.0.7
colorama 0.4.4
Cython 0.29.28
flatbuffers 2.0
idna 3.3
mmcv-full 1.3.16
numpy 1.22.1
onnxruntime-gpu 1.10.0
opencv-python 4.5.4.58
packaging 21.2
Pillow 8.4.0
pip 22.0.4
protobuf 3.19.4
pyparsing 2.4.7
PyYAML 6.0
regex 2021.11.2
requests 2.26.0
setuptools 58.2.0
timm 0.4.12
torch 1.10.0+cu111
torchvision 0.11.1
tqdm 4.62.3
typing-extensions 3.10.0.2
urllib3 1.26.7
VapourSynth 57
VapourSynth-portable 57
vsbasicvsrpp 1.4.1
vsdpir 2.0.0
vsgan 1.6.4
vshinet 1.0.0
vsrealesrgan 2.0.0
vsrife 2.0.0
vsswinir 1.0.0
vsutil 0.6.0
wheel 0.37.0
yapf 0.31.0
installed.
> Do you know a way to install the CUDA SDK in a portable way without having to install the SDK in the system?

You have to copy all the DLLs from <CUDA installation directory>\bin and <cuDNN directory>\bin to another directory whose path is in the system PATH.
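That copy step can be scripted. A sketch using only the standard library; the source and destination directories are placeholders for your CUDA/cuDNN bin folders and the portable target:

```python
import shutil
from pathlib import Path

def collect_dlls(src_dirs, dest):
    """Copy every *.dll from the given bin directories into one folder,
    creating the folder if needed; returns the copied file names."""
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in src_dirs:
        for dll in Path(src).glob("*.dll"):
            shutil.copy2(dll, dest / dll.name)
            copied.append(dll.name)
    return sorted(copied)
```

For example, `collect_dlls([r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin", r"C:\Program Files\NVIDIA\CUDNN\v8.3\bin"], r"I:\portable\cuda_dlls")` would gather everything into one directory that can then be put on PATH.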
I read https://github.com/HolyWu/vs-dpir/discussions/19 and did the following.
I installed the CUDA SDK runtimes and the cuDNN runtimes:
Directory of c:\Program Files\NVIDIA\CUDNN\v8.3\bin
20.03.2022 08:25 <DIR> .
20.03.2022 08:25 <DIR> ..
05.01.2022 08:18 237.568 cudnn64_8.dll
05.01.2022 08:36 129.872.896 cudnn_adv_infer64_8.dll
05.01.2022 08:46 97.293.824 cudnn_adv_train64_8.dll
05.01.2022 09:15 736.718.848 cudnn_cnn_infer64_8.dll
05.01.2022 09:21 81.487.360 cudnn_cnn_train64_8.dll
05.01.2022 08:23 88.405.504 cudnn_ops_infer64_8.dll
05.01.2022 08:28 70.403.584 cudnn_ops_train64_8.dll
7 File(s), 1.204.419.584 bytes
2 Dir(s), 328.608.882.688 bytes free
Directory of C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin
20.03.2022 08:16 <DIR> .
20.03.2022 08:16 <DIR> ..
03.12.2021 05:03 149.362.176 cublas64_11.dll
03.12.2021 05:03 312.164.864 cublasLt64_11.dll
18.12.2021 04:06 435.200 cudart32_110.dll
18.12.2021 04:06 509.440 cudart64_110.dll
11.02.2022 04:48 361.522.688 cufft64_10.dll
11.02.2022 04:48 288.768 cufftw64_10.dll
18.12.2021 04:25 61.392.896 curand64_10.dll
11.02.2022 07:32 267.136.512 cusolver64_11.dll
11.02.2022 07:32 158.771.200 cusolverMg64_11.dll
11.02.2022 06:03 251.379.200 cusparse64_11.dll
11.02.2022 05:03 275.456 nppc64_11.dll
11.02.2022 05:03 13.085.184 nppial64_11.dll
11.02.2022 05:03 4.994.560 nppicc64_11.dll
11.02.2022 05:03 8.497.664 nppidei64_11.dll
11.02.2022 05:03 73.144.832 nppif64_11.dll
11.02.2022 05:03 30.545.408 nppig64_11.dll
11.02.2022 05:03 7.096.832 nppim64_11.dll
11.02.2022 05:03 32.928.256 nppist64_11.dll
11.02.2022 05:03 248.320 nppisu64_11.dll
11.02.2022 05:03 3.146.752 nppitc64_11.dll
11.02.2022 05:03 15.874.048 npps64_11.dll
03.12.2021 05:03 344.064 nvblas64_11.dll
11.02.2022 05:22 3.676.160 nvjpeg64_11.dll
11.02.2022 04:20 7.207.424 nvrtc-builtins64_116.dll
11.02.2022 04:20 33.174.528 nvrtc64_112_0.dll
25 File(s), 1.797.202.432 bytes
2 Dir(s), 328.610.213.888 bytes free
Then I copied the cudnn*.dll files into the "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin" folder:
Directory of C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin
20.03.2022 08:30 <DIR> .
20.03.2022 08:16 <DIR> ..
03.12.2021 05:03 149.362.176 cublas64_11.dll
03.12.2021 05:03 312.164.864 cublasLt64_11.dll
18.12.2021 04:06 435.200 cudart32_110.dll
18.12.2021 04:06 509.440 cudart64_110.dll
05.01.2022 08:18 237.568 cudnn64_8.dll
05.01.2022 08:36 129.872.896 cudnn_adv_infer64_8.dll
05.01.2022 08:46 97.293.824 cudnn_adv_train64_8.dll
05.01.2022 09:15 736.718.848 cudnn_cnn_infer64_8.dll
05.01.2022 09:21 81.487.360 cudnn_cnn_train64_8.dll
05.01.2022 08:23 88.405.504 cudnn_ops_infer64_8.dll
05.01.2022 08:28 70.403.584 cudnn_ops_train64_8.dll
11.02.2022 04:48 361.522.688 cufft64_10.dll
11.02.2022 04:48 288.768 cufftw64_10.dll
18.12.2021 04:25 61.392.896 curand64_10.dll
11.02.2022 07:32 267.136.512 cusolver64_11.dll
11.02.2022 07:32 158.771.200 cusolverMg64_11.dll
11.02.2022 06:03 251.379.200 cusparse64_11.dll
11.02.2022 05:03 275.456 nppc64_11.dll
11.02.2022 05:03 13.085.184 nppial64_11.dll
11.02.2022 05:03 4.994.560 nppicc64_11.dll
11.02.2022 05:03 8.497.664 nppidei64_11.dll
11.02.2022 05:03 73.144.832 nppif64_11.dll
11.02.2022 05:03 30.545.408 nppig64_11.dll
11.02.2022 05:03 7.096.832 nppim64_11.dll
11.02.2022 05:03 32.928.256 nppist64_11.dll
11.02.2022 05:03 248.320 nppisu64_11.dll
11.02.2022 05:03 3.146.752 nppitc64_11.dll
11.02.2022 05:03 15.874.048 npps64_11.dll
03.12.2021 05:03 344.064 nvblas64_11.dll
11.02.2022 05:22 3.676.160 nvjpeg64_11.dll
11.02.2022 04:20 7.207.424 nvrtc-builtins64_116.dll
11.02.2022 04:20 33.174.528 nvrtc64_112_0.dll
32 File(s), 3.001.622.016 bytes
2 Dir(s), 327.408.144.384 bytes free
(I did not install TensorRT since I use a GeForce GTX 1070 Ti, which has no Tensor cores.)
I then uninstalled and reinstalled onnxruntime-gpu. But using `clip = DPIR(clip=clip, strength=50.000, task="deblock", provider=1, device_id=0)` still gives me:
2022-03-20 08:33:53.7761002 [E:onnxruntime:Default, provider_bridge_ort.cc:995 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "Das angegebene Modul wurde nicht gefunden." when trying to load "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2022-03-20 08:33:53.7761675 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:535 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
I also tried copying all the DLLs into the "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\onnxruntime\capi\" folder, with the result that no errors appeared, but nothing happened. Opening the script with vspipe gives: "Could not load library cudnn_ops_infer64_8.dll. Error code 126. Please make sure cudnn_ops_infer64_8.dll is in your library path!"
-> Do you know a way to get this working without changing the global variables of the system? Using:
import os
import sys
sys.path.insert(0, os.path.abspath('I:/Hybrid/64bit/Vapoursynth/Lib/site-packages/onnxruntime/capi'))  # which contains all the libraries
doesn't work. ;)
(side note: the SDK installer only set CUDA_PATH and did not modify PATH)
> Do you know a way to get this working without changing the global variables of the system?

Copying these DLLs to the same directory as vspipe.exe or vsedit.exe should work, but it pollutes the directory. :D
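For what it's worth, the sys.path attempt above can't work in principle: sys.path only affects Python module imports, while dependent DLLs are resolved by the Windows loader's own search path. Since Python 3.8 the supported way to extend that search is os.add_dll_directory. A guarded sketch (the directory path is illustrative, and whether a given dependency chain honors the added directory depends on how each DLL loads its own dependencies):

```python
import os

def register_dll_dir(path):
    """Add a directory to the Windows DLL search path (Python 3.8+).

    Returns False instead of raising when unsupported (non-Windows)
    or when the directory does not exist.
    """
    if hasattr(os, "add_dll_directory") and os.path.isdir(path):
        os.add_dll_directory(path)
        return True
    return False
```

Called before importing onnxruntime, e.g. `register_dll_dir('I:/Hybrid/64bit/Vapoursynth/Lib/site-packages/onnxruntime/capi')`, this is the portable-friendly alternative to editing the system PATH.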
> (side note: SDK only set CUDA_PATH and did not modify PATH)

No. The CUDA installer also adds C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin and C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\libnvvp to system PATH for me.
By the way, you should use cuDNN v8.2.4 as listed in the table. Using a different version of cuDNN is likely to cause incompatibility.
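A tiny guard against exactly this mistake. This is a hedged sketch: the 8.2.4 default reflects the version mentioned above, and the assumption that only major.minor needs to match (with the patch level free to differ) is mine, not a documented cuDNN guarantee:

```python
def cudnn_matches(found, required="8.2.4"):
    """Compare major.minor of an installed cuDNN version against the
    version the requirements table lists; the patch level may differ
    (an assumption, not a documented compatibility rule)."""
    def major_minor(version):
        return tuple(int(part) for part in version.split(".")[:2])
    return major_minor(found) == major_minor(required)
```

Here the user's v8.3 install fails the check against the required 8.2.x, which matches the breakage seen in the thread.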
I added 'I:/Hybrid/64bit/Vapoursynth/Lib/site-packages/onnxruntime/capi' to the PATH and restarted the system; still I get:
I:\Hybrid\64bit\Vapoursynth>VSPipe.exe c:\Users\Selur\Desktop\test.vpy e:\test.y4m
Could not load library cudnn_cnn_infer64_8.dll. Error code 126
Please make sure cudnn_cnn_infer64_8.dll is in your library path!
checking the path:
I:\Hybrid\64bit\Vapoursynth>PATH
PATH=C:\Python37\Scripts\;C:\Python37\;C:\Windows\System32;C:\Windows;C:\Windows\System32\wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\Kensington\TrackballWorks;C:\Windows\System32;C:\Windows;C:\Windows\System32\wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR;C:\Windows\System32;C:\Windows;C:\Windows\System32\wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files\Calibre2\;C:\Program Files (x86)\GnuPG\bin;C:\Program Files\PuTTY\;C:\Program Files\Git\cmd;I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\onnxruntime\capi;"C:\Users\Selur\AppData\Local\Microsoft\WindowsApps;";
with
Directory of I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\onnxruntime\capi
20.03.2022 08:39 <DIR> .
20.03.2022 08:32 <DIR> ..
20.03.2022 08:39 27.950.512 capi.rar
03.12.2021 05:03 149.362.176 cublas64_11.dll
03.12.2021 05:03 312.164.864 cublasLt64_11.dll
18.12.2021 04:06 435.200 cudart32_110.dll
18.12.2021 04:06 509.440 cudart64_110.dll
05.01.2022 08:18 237.568 cudnn64_8.dll
05.01.2022 08:36 129.872.896 cudnn_adv_infer64_8.dll
05.01.2022 08:46 97.293.824 cudnn_adv_train64_8.dll
05.01.2022 09:15 736.718.848 cudnn_cnn_infer64_8.dll
05.01.2022 09:21 81.487.360 cudnn_cnn_train64_8.dll
05.01.2022 08:23 88.405.504 cudnn_ops_infer64_8.dll
05.01.2022 08:28 70.403.584 cudnn_ops_train64_8.dll
11.02.2022 04:48 361.522.688 cufft64_10.dll
11.02.2022 04:48 288.768 cufftw64_10.dll
18.12.2021 04:25 61.392.896 curand64_10.dll
11.02.2022 07:32 267.136.512 cusolver64_11.dll
11.02.2022 07:32 158.771.200 cusolverMg64_11.dll
11.02.2022 06:03 251.379.200 cusparse64_11.dll
11.02.2022 05:03 275.456 nppc64_11.dll
11.02.2022 05:03 13.085.184 nppial64_11.dll
11.02.2022 05:03 4.994.560 nppicc64_11.dll
11.02.2022 05:03 8.497.664 nppidei64_11.dll
11.02.2022 05:03 73.144.832 nppif64_11.dll
11.02.2022 05:03 30.545.408 nppig64_11.dll
11.02.2022 05:03 7.096.832 nppim64_11.dll
11.02.2022 05:03 32.928.256 nppist64_11.dll
11.02.2022 05:03 248.320 nppisu64_11.dll
11.02.2022 05:03 3.146.752 nppitc64_11.dll
11.02.2022 05:03 15.874.048 npps64_11.dll
03.12.2021 05:03 344.064 nvblas64_11.dll
11.02.2022 05:22 3.676.160 nvjpeg64_11.dll
11.02.2022 04:20 7.207.424 nvrtc-builtins64_116.dll
11.02.2022 04:20 33.174.528 nvrtc64_112_0.dll
20.03.2022 08:32 3.795 onnxruntime_collect_build_info.py
20.03.2022 08:32 36.313 onnxruntime_inference_collection.py
20.03.2022 08:32 358.640.552 onnxruntime_providers_cuda.dll
20.03.2022 08:32 20.400 onnxruntime_providers_shared.dll
20.03.2022 08:32 3.444.648 onnxruntime_providers_tensorrt.dll
20.03.2022 08:32 15.809.456 onnxruntime_pybind11_state.pyd
20.03.2022 08:32 6.299 onnxruntime_validation.py
20.03.2022 08:32 <DIR> training
20.03.2022 08:32 76 version_info.py
20.03.2022 08:32 413 _ld_preload.py
20.03.2022 08:32 837 _pybind_state.py
20.03.2022 08:32 251 __init__.py
20.03.2022 08:32 <DIR> __pycache__
44 File(s), 3.407.535.568 bytes
4 Dir(s), 97.678.790.656 bytes free
-> seems like 'PATH' isn't the environment variable that is checked :/
> By the way, you should use cuDNN v8.2.4 as listed in the table. Using a different version of cuDNN is likely to cause incompatibility.

-> I'll try that (I simply used the latest version).
> No. The CUDA installer also adds C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin and C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\libnvvp to system PATH for me.

Not here. Note that I only installed the runtime components in the installer.
You were right, the versions were the issue. I copied the DLLs from CUDA\v11.4\bin into Vapoursynth\Lib\site-packages\onnxruntime\capi -> now `clip = DPIR(clip=clip, strength=50.000, task="deblock", provider=1, device_id=0)` works!
Thanks!
Okay, now I have a follow-up question: which TensorRT download should one use from https://developer.nvidia.com/nvidia-tensorrt-8x-download?
Using TensorRT 8.2 GA Update 2, I get:
2022-03-20 10:58:06.3199329 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2022-03-20 09:58:06 ERROR] 1: [stdArchiveReader.cpp::nvinfer1::rt::StdArchiveReader::StdArchiveReader::54] Error Code 1: Serialization (Serialization assertion sizeRead == static_cast(mEnd - mCurrent) failed.Size specified in header does not match archive size)
2022-03-20 10:58:06.3200361 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2022-03-20 09:58:06 ERROR] 4: [runtime.cpp::nvinfer1::Runtime::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
Are these the right libraries and the issue is just that my GPU doesn't support TensorRT, or am I using the wrong libraries?
You need TensorRT 8.0 GA Update 1, as listed in https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements.
Thanks!
> TensorRT 8.0 GA Update 1

-> TensorRT-8.0.3.4.Windows10.x86_64.cuda-11.3.cudnn8.2.zip gives me:
2022-03-20 12:20:38.3149976 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2022-03-20 11:20:38 ERROR] 1: [stdArchiveReader.cpp::nvinfer1::rt::StdArchiveReader::StdArchiveReader::34] Error Code 1: Serialization (Serialization assertion safeVersionRead == safeSerializationVersion failed.Version tag does not match. Note: Current Version: 43, Serialized Engine Version: 0)
2022-03-20 12:20:38.3151044 [E:onnxruntime:Default, tensorrt_execution_provider.h:51 onnxruntime::TensorrtLogger::log] [2022-03-20 11:20:38 ERROR] 4: [runtime.cpp::nvinfer1::Runtime::deserializeCudaEngine::76] Error Code 4: Internal Error (Engine deserialization failed.)
But that is probably because my GPU doesn't have Tensor cores.
Just for others that run into this too: copying the DLLs into the capi folder only seems to work if the capi folder is also listed in the Windows PATH environment variable. (I tried to work around it with:
import os
import site
os.environ['PATH'] += site.getsitepackages()[0]+'/Lib/site-packages/onnxruntime/capi'
but that did not work.)
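One likely reason that workaround failed: the concatenation above appends the directory directly onto the last existing PATH entry with no separator between them, fusing the two into a single invalid path. A sketch of the corrected append (note also that on Windows with Python 3.8+, PATH is no longer consulted for a module's dependent DLLs, which may be why PATH tricks did not help here either way):

```python
import os

def append_path_entry(directory):
    """Append a directory to PATH with the correct separator
    (';' on Windows, ':' elsewhere), avoiding the fused-entry bug."""
    os.environ["PATH"] = os.environ.get("PATH", "") + os.pathsep + str(directory)
```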
Moving the libraries to a different folder and explicitly loading them using:
import os
import site
# explicitly load the CUDA/cuDNN DLLs with ctypes
from ctypes import WinDLL
path = site.getsitepackages()[0]+'/onnxruntime_dlls/'
WinDLL(path+'cublas64_11.dll')
WinDLL(path+'cudart64_110.dll')
WinDLL(path+'cudnn64_8.dll')
WinDLL(path+'cudnn_cnn_infer64_8.dll')
WinDLL(path+'cudnn_ops_infer64_8.dll')
WinDLL(path+'cufft64_10.dll')
WinDLL(path+'cufftw64_10.dll')
(I loaded only the CUDA libraries here.) That worked for me. :)
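The same preloading idea as a reusable helper. This is a sketch, not part of the solution above: ctypes.CDLL is the generic spelling of the Windows-specific WinDLL loader, and the caller supplies whichever DLL paths their provider needs, ordered dependencies-first:

```python
from ctypes import CDLL

def preload(dll_paths):
    """Load each library explicitly so that later loads (e.g. by
    onnxruntime's CUDA provider) find them already mapped into the
    process. Returning the handles keeps the libraries referenced."""
    return [CDLL(str(p)) for p in dll_paths]
```

Keeping the returned list alive for the lifetime of the script is the point; dropping the handles would allow the libraries to be unloaded.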
Can you shed some light on which runtime (onnxruntime, onnxruntime-gpu and onnxruntime-directml) is needed for which provider? When I install 'onnxruntime-gpu' and use provider=1, I get the LoadLibrary error shown above (dpir still seems to work). When I uninstall 'onnxruntime-gpu', install 'onnxruntime' and use provider=1, no error appears. When I uninstall 'onnxruntime', install 'onnxruntime-directml' and use provider=1, no error appears. When I use provider=3, no error appears and the processing is a lot faster.
-> Is there a downside to using onnxruntime-directml?
Also, using provider=0 and provider=1 doesn't seem to make a difference; from the looks of it, CPU and GPU usage is the same both times. Would be nice if you could shed some light on what is used when. :)