microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[Build] Missing DLL onnxruntime_providers_cuda.dll. Where do you get this dll? #21256

Closed: TannerCypret closed this 2 months ago

TannerCypret commented 3 months ago

Describe the issue

I have a C++ project and I can run ONNX models just fine in CPU mode, but I want the option to switch to GPU depending on the model. I have the following code to optionally set ONNX Runtime to execute on the GPU.

    OrtCUDAProviderOptions options;
    options.device_id = 1;
    ort_session_options.AppendExecutionProvider_CUDA(options);

However, I get this error:

struct Ort::Exception: C:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1426 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:project\bin\onnxruntime_providers_cuda.dll"
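
For reference, a minimal sketch of making the GPU path optional, so a failed provider load falls back to CPU instead of throwing out of the setup code (the ConfigureProviders wrapper and the log message are illustrative, not from the original project):

    #include <iostream>
    #include <onnxruntime_cxx_api.h>

    // Append the CUDA execution provider if its DLL loads; otherwise keep
    // the default CPU provider.
    void ConfigureProviders(Ort::SessionOptions& ort_session_options) {
        try {
            OrtCUDAProviderOptions options;
            options.device_id = 1;  // device index from the snippet above
            ort_session_options.AppendExecutionProvider_CUDA(options);
        } catch (const Ort::Exception& e) {
            // Thrown when onnxruntime_providers_cuda.dll or one of its
            // CUDA/cuDNN dependencies cannot be loaded (e.g. error 126).
            std::cerr << "CUDA EP unavailable, using CPU: " << e.what() << "\n";
        }
    }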

I've been following tutorials to get this to work. I installed CUDA Toolkit 11.8 and cuDNN 8.9, copied the cuDNN DLLs into the CUDA folder, and added the CUDA location to my PATH. This page mentions that I need to build onnxruntime:

https://onnxruntime.ai/docs/build/eps.html#cuda

I found the DLL that I was missing on this page: https://github.com/microsoft/onnxruntime/releases. I downloaded the onnxruntime-win-x64-gpu-1.18.1.zip archive, and the DLL that I needed was in there. I added it to my project, but it still crashes with a different error. Is it essential that I build onnxruntime myself in order to generate a DLL that's specific to my machine?

I tried downloading it directly because running this command:

    .\build.bat --use_cuda --cudnn_home <cudnn home path> --cuda_home <cuda home path>

gives me an error about Visual Studio not having CUDA integration:

Visual Studio 17 2022 given toolset cuda=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin\ cannot detect Visual Studio integration files in path C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8/bin/extras/visual_studio_integration/MSBuildExtensions

Is it required that I build the onnxruntime project, and maybe that's the step I'm missing? I've tried running the CUDA Toolkit installer multiple times, but it doesn't seem to add the Visual Studio integration that's required for me to build it. I just want the option to run my model in GPU mode, so advice is welcome if you know what I'm doing wrong or if I'm not on the right path.

Urgency

No response

Target platform

Windows

Build script

Visual Studio

Error / output

onnxruntime_providers_cuda.dll missing

Visual Studio Version

2019

GCC / Compiler Version

No response

tianleiwu commented 3 months ago

Regarding the error about Visual Studio not having CUDA integration: use --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8" instead of --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin".
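
With that change, the command from the original post becomes (keeping the <cudnn home path> placeholder as-is):

    .\build.bat --use_cuda --cudnn_home <cudnn home path> --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8"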

srikanthshettigar commented 3 months ago

Check if the steps below help.

  1. Update your C++ project properties to use the headers and libs provided in onnxruntime-win-x64-gpu-1.18.1.zip, then build the C++ project.

  2. When running your C++ project's exe, make sure it can find (a) the DLLs provided in onnxruntime-win-x64-gpu-1.18.1.zip (one of them is onnxruntime_providers_cuda.dll), (b) CUDA, and (c) cuDNN. A quick way to check this from C++ is sketched below.
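
A minimal Win32 sketch for that check, assuming the exe runs on Windows; the CanLoad helper name is illustrative:

    #include <iostream>
    #include <windows.h>

    // Returns true if the loader can resolve the DLL and everything it
    // depends on (the CUDA and cuDNN runtime DLLs) via the normal search order.
    bool CanLoad(const wchar_t* dll) {
        HMODULE handle = LoadLibraryW(dll);
        if (handle == nullptr) {
            // Error 126 (ERROR_MOD_NOT_FOUND) means the DLL itself or one of
            // its dependencies was not found on the search path.
            std::wcerr << dll << L" failed with error " << GetLastError() << L"\n";
            return false;
        }
        FreeLibrary(handle);
        return true;
    }

    // Example: CanLoad(L"onnxruntime_providers_cuda.dll");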

TannerCypret commented 3 months ago

Check if the steps below help.

  1. Update your C++ project properties to use the headers and libs provided in onnxruntime-win-x64-gpu-1.18.1.zip, then build the C++ project.
  2. When running your C++ project's exe, make sure it can find (a) the DLLs provided in onnxruntime-win-x64-gpu-1.18.1.zip (one of them is onnxruntime_providers_cuda.dll), (b) CUDA, and (c) cuDNN.

I manually added the headers to the project and included them. I added the DLLs to the folder where it was looking for onnxruntime_providers_cuda.dll. The code still breaks on the last line of this block:

    OrtCUDAProviderOptions options;
    options.device_id = 1;
    ort_session_options.AppendExecutionProvider_CUDA(options);

I get this error: Unhandled exception at 0x00007FFC3B3CF6FE (ucrtbase.dll)

I was able to build the onnxruntime project after following the advice from @tianleiwu's comment, so I have CUDA and cuDNN installed.

TannerCypret commented 3 months ago

I got it working. I'm not sure why, but after copying the DLLs over again it finally worked. I have this loop to print all execution providers:

    auto providers = Ort::GetAvailableProviders();
    for (auto provider : providers) {
        cout << provider << endl;
    }

and it lists:

    TensorrtExecutionProvider
    CUDAExecutionProvider
    CPUExecutionProvider

Is there a way to print which execution provider I am using, not just the ones that are available? I think it's using the GPU because that's what I'm setting it to use, but in Python, for example, you can run this command:

    import onnxruntime as ort
    print(ort.get_device())

and that will print CPU or GPU. Is there a similar command in C++ so that I can validate that I am doing this correctly?

tianleiwu commented 3 months ago

Is there a similar command in C++ so that I can validate that I am doing this correctly?

You can use the profiling tool: https://onnxruntime.ai/docs/performance/tune-performance/profiling-tools.html#gpu-profiling. It will show the device of each node in the resulting JSON file; some nodes might run on the CPU and some on the GPU.
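
A minimal sketch of turning that profiler on from the C++ API (the file prefix and model path are illustrative):

    #include <onnxruntime_cxx_api.h>

    // Run a session with profiling enabled; the JSON trace records, per node,
    // which execution provider (CPU or CUDA) actually ran it.
    void RunWithProfiling() {
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "profiling-demo");
        Ort::SessionOptions session_options;
        session_options.EnableProfiling(L"ort_profile");  // output file prefix (illustrative)
        Ort::Session session(env, L"model.onnx", session_options);  // model path (illustrative)

        // ... session.Run(...) as usual ...

        Ort::AllocatorWithDefaultOptions allocator;
        auto profile_path = session.EndProfilingAllocated(allocator);  // path of the written trace
    }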