dme-compunet / YoloV8

Integrate YOLOv8 into your C# project for a variety of real-time tasks including object detection, instance segmentation, pose estimation and more, using ONNX Runtime.
https://www.nuget.org/packages/YoloV8
GNU Affero General Public License v3.0

How to use GPU? #78

Open ckmstydy opened 2 weeks ago

ckmstydy commented 2 weeks ago

I have installed CUDA 12.2 and cuDNN 9.3, but I still get an error when loading the model with YoloPredictor (my ONNX Runtime version is 1.19.1):

    [ErrorCode:RuntimeException] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\OrderTaking\PupilDet-onnx-gpu\PupilDet\bin\Debug\net8.0-windows\runtimes\win-x64\native\onnxruntime_providers_cuda.dll"]


    var predictor = new YoloPredictor("/model/best.onnx");
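
LoadLibrary error 126 usually means that a dependency of onnxruntime_providers_cuda.dll (a CUDA or cuDNN runtime DLL) could not be resolved, not that the provider DLL itself is missing. Below is a minimal diagnostic sketch, assuming the typical CUDA 12 / cuDNN 9 DLL names and the default runtimes\win-x64\native output layout; the paths and DLL names are illustrative and may differ on your machine:

    using System;
    using System.IO;
    using System.Linq;
    using System.Runtime.InteropServices;

    class CudaProviderCheck
    {
        static void Main()
        {
            // Illustrative path; adjust to your build output folder.
            string providerDll = Path.Combine(AppContext.BaseDirectory,
                "runtimes", "win-x64", "native", "onnxruntime_providers_cuda.dll");

            Console.WriteLine($"provider dll exists: {File.Exists(providerDll)}");
            Console.WriteLine($"provider dll loads:  {NativeLibrary.TryLoad(providerDll, out _)}");

            // Typical native dependencies of the CUDA 12 / cuDNN 9 provider build;
            // error 126 means one of these (or their own dependencies) cannot be found.
            string[] deps = { "cudart64_12.dll", "cublas64_12.dll", "cublasLt64_12.dll", "cudnn64_9.dll" };
            string[] pathDirs = (Environment.GetEnvironmentVariable("PATH") ?? "")
                .Split(Path.PathSeparator, StringSplitOptions.RemoveEmptyEntries);

            foreach (string dep in deps)
            {
                bool onPath = pathDirs.Any(dir => File.Exists(Path.Combine(dir, dep)));
                Console.WriteLine($"{dep}: {(onPath ? "found on PATH" : "NOT found on PATH")}");
            }
        }
    }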

ErrorGz commented 2 weeks ago

I haven't used this code for a while, so I'm concerned my response might be incomplete. First, clone the dme-compunet/YoloV8 code into your project, then in the NuGet package manager, uninstall Microsoft.ML.OnnxRuntime and install Microsoft.ML.OnnxRuntime.Gpu.

Second, add the zlibwapi.dll file to your project and set it to 'Copy to Output Directory'.

Third, when you call YoloV8Builder().WithSessionOptions, create a SessionOptions instance and call options.AppendExecutionProvider_xxxxxxx() on it. Below is an example using DirectML (DirectX 12), not the CUDA GPU provider.

    SessionOptions options = new SessionOptions();
    options.AppendExecutionProvider_DML();

    var predictor2 = new YoloV8Builder()
        .WithSessionOptions(options)
        .UseOnnxModel(new BinarySelector("onnx/yolov8n-seg.onnx"))
        .Build();

    return predictor2;
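
For CUDA rather than DirectML, the same builder pattern should work with the CUDA execution provider from Microsoft.ML.OnnxRuntime.Gpu. A minimal sketch, assuming the YoloV8Builder/BinarySelector API shown above; the namespaces and model path are illustrative and may differ between package versions:

    // Sketch only: requires the Microsoft.ML.OnnxRuntime.Gpu package and
    // resolvable CUDA/cuDNN runtime DLLs (see the PATH discussion below).
    using Compunet.YoloV8;
    using Microsoft.ML.OnnxRuntime;

    var options = new SessionOptions();
    options.AppendExecutionProvider_CUDA(0); // GPU device id 0

    var predictor = new YoloV8Builder()
        .WithSessionOptions(options)
        .UseOnnxModel(new BinarySelector("onnx/yolov8n-seg.onnx"))
        .Build();
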
dme-compunet commented 2 weeks ago

@ckmstydy Try adding the CUDA bin folder to the Path environment variable
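
If changing the system Path is not convenient, the CUDA and cuDNN bin folders can also be prepended to the current process's PATH before the predictor is built. A rough sketch with illustrative install locations; adjust them to your CUDA 12.2 / cuDNN 9.3 setup:

    using System;

    // Illustrative install locations; adjust to your machine.
    string cudaBin  = @"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin";
    string cudnnBin = @"C:\Program Files\NVIDIA\CUDNN\v9.3\bin";

    // This only affects the current process and must run before the CUDA
    // execution provider DLL is loaded (i.e. before building the predictor).
    string currentPath = Environment.GetEnvironmentVariable("PATH") ?? "";
    Environment.SetEnvironmentVariable("PATH", $"{cudaBin};{cudnnBin};{currentPath}");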

ckmstydy commented 2 weeks ago

> I haven't used this code for a while, so I'm concerned my response might be incomplete. First, clone the dme-compunet/YoloV8 code into your project, then in the NuGet package manager, uninstall Microsoft.ML.OnnxRuntime and install Microsoft.ML.OnnxRuntime.Gpu.
>
> Second, add the zlibwapi.dll file to your project and set it to 'Copy to Output Directory'.
>
> Third, when you call YoloV8Builder().WithSessionOptions, create a SessionOptions instance and call options.AppendExecutionProvider_xxxxxxx() on it.

Thanks, but unfortunately it didn't work for me

ckmstydy commented 2 weeks ago

> @ckmstydy Try adding the CUDA bin folder to the Path environment variable

I have already added it, and "nvcc -V" works correctly.