NickSwardh / YoloDotNet

YoloDotNet - A C# .NET 8.0 project for Classification, Object Detection, OBB Detection, Segmentation and Pose Estimation in both images and videos.
GNU General Public License v3.0

Microsoft.ML.OnnxRuntime.OnnxRuntimeException: #9

Closed: halukmy closed this issue 4 months ago

halukmy commented 5 months ago

Microsoft.ML.OnnxRuntime.OnnxRuntimeException: '[ErrorCode:RuntimeException] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1209 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\Users\cyberpark\Documents\YoloDotNet-master\ConsoleDemo\bin\Debug\net8.0\runtimes\win-x64\native\onnxruntime_providers_cuda.dll"

// Creates the inference session with the CUDA execution provider when useCuda is set
_session = useCuda
    ? new InferenceSession(onnxModel, SessionOptions.MakeSessionOptionWithCudaProvider(gpuId))
    : new InferenceSession(onnxModel);

OnnxModel = _session.GetOnnxProperties();

_parallelOptions = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };
_useCuda = useCuda;

In this case, I got this error after moving on to the video processing section. Can you assist?

niclasastrom commented 5 months ago

I can't get the GPU-variant to work. I've tried to follow the instructions, but I run into the same type of issue as Halukmy. Windows 11 Pro. Latest CUDA.

freefer commented 5 months ago

Install CUDA and a compatible version of cuDNN (see https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html), and add the bin directory of the extracted cuDNN archive to your system PATH environment variable. Because this project depends on ONNX Runtime 1.17.1, I recommend CUDA 11.8.0 and cuDNN 8.5.0.96 or 8.7.0.84. Also download zlib.dll (http://www.winimage.com/zLibDll/zlib123dllx64.zip) and place it in the CUDA\bin directory.
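If the error persists after installing everything, a quick sanity check is to scan PATH for the native libraries ONNX Runtime needs at load time. A minimal C# sketch, with the caveat that the DLL names below are my assumptions for CUDA 11.8 / cuDNN 8.x on Windows and not an exhaustive list:

using System;
using System.IO;
using System.Linq;

class CudaPathCheck
{
    static void Main()
    {
        // DLL names are assumptions for CUDA 11.8 / cuDNN 8.x; adjust to your installed versions.
        // zlibwapi.dll is the usual 64-bit zlib DLL name; verify against the archive you downloaded.
        string[] requiredDlls = { "cudart64_110.dll", "cublas64_11.dll", "cudnn64_8.dll", "zlibwapi.dll" };

        var pathDirs = (Environment.GetEnvironmentVariable("PATH") ?? string.Empty)
            .Split(Path.PathSeparator, StringSplitOptions.RemoveEmptyEntries);

        foreach (var dll in requiredDlls)
        {
            // Report the first PATH directory that contains the DLL, or flag it as missing
            var hit = pathDirs.FirstOrDefault(dir => File.Exists(Path.Combine(dir.Trim(), dll)));
            Console.WriteLine(hit != null ? $"{dll} found in {hit}" : $"{dll} NOT found on PATH");
        }
    }
}

Any DLL reported as missing is a likely cause of the LoadLibrary error 126 shown above.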

NickSwardh commented 5 months ago

Yes, as freefer mentioned, the CUDA and cuDNN versions must be compatible with the ONNX Runtime version.

Here's a quick installation guide for others who might encounter the same problem.

Let's start by checking the ONNX Runtime CUDA compatibility chart (current at the time of writing): https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements

Here we can see that the default CUDA version for ORT (ONNX Runtime) is CUDA v11.8, so that's the CUDA version to download. We can also see that cuDNN versions up to 8.9 are compatible with CUDA v11.8, so that's the cuDNN version we want.

Installation

  1. Install CUDA v11.8
  2. Open the CUDA bin folder, e.g. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin
  3. Unzip the cuDNN zip archive
  4. Copy the cuDNN DLLs from its bin folder into the CUDA bin folder

Set environment variable

Copy the path to your CUDA bin folder, e.g. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin, and add it to your system's PATH environment variable.

That's it :)
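To verify the setup, you can also try to create a session with the CUDA provider directly and catch the failure. A small sketch using the Microsoft.ML.OnnxRuntime API (the model path is just a placeholder for any valid ONNX model):

using System;
using Microsoft.ML.OnnxRuntime;

class CudaProviderCheck
{
    static void Main()
    {
        try
        {
            // Either of the next two calls throws if onnxruntime_providers_cuda.dll
            // or its CUDA/cuDNN dependencies can't be loaded
            using var options = SessionOptions.MakeSessionOptionWithCudaProvider(0);
            using var session = new InferenceSession("model.onnx", options);
            Console.WriteLine("CUDA execution provider loaded successfully.");
        }
        catch (OnnxRuntimeException ex)
        {
            Console.WriteLine("CUDA provider failed to load: " + ex.Message);
        }
    }
}

If this prints the success message, the YoloDotNet demos should be able to run on the GPU as well.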

niclasastrom commented 5 months ago

Thanks a lot! No more runtime errors!

niclasastrom commented 5 months ago

Guys, I have a follow-up question (maybe OT, maybe not):

Running the ConsoleDemo app no longer generates any runtime errors. However, performance using the GPU is worse than using the CPU. I have an RTX 4070 running Windows 11 Pro, with the latest OS and NVIDIA driver updates installed.

I expected higher throughput when using the GPU, but I could be wrong. What performance can I expect, CPU vs GPU?

For example, the classification test took 130ms on the CPU and 572ms on the GPU. Do you know if this is expected?

I added a couple of lines to measure compute time:

var stopWatch = new Stopwatch();
stopWatch.Start();
List<Classification> results = yolo.RunClassification(image, 3); // Get top 3 classifications. Default = 1
stopWatch.Stop();
Console.WriteLine("Elapsed time: " + stopWatch.ElapsedMilliseconds + " ms");

Thanks for your input. If this follow-up question doesn't fit the topic, please forgive me and I will try to file my question somewhere else.