dme-compunet / YoloSharp

🚀 Use YOLO11 in real-time for object detection tasks, with edge performance ⚡️ powered by ONNX-Runtime.
https://www.nuget.org/packages/YoloSharp
GNU Affero General Public License v3.0

How to use GPU? #78

Open · ckmstydy opened this issue 1 month ago

ckmstydy commented 1 month ago

I have installed CUDA 12.2 and cuDNN 9.3, but I still get errors when loading the model with YoloPredictor (my ONNX Runtime version is 1.19.1):

    [ErrorCode: RuntimeException] D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1637 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\OrderTaking\PupilDet-onnx-gpu\PupilDet\bin\Debug\net8.0-windows\runtimes\win-x64\native\onnxruntime_providers_cuda.dll"]


    var predictor = new YoloPredictor("/model/best.onnx");
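
Windows error 126 ("the specified module could not be found") from LoadLibrary usually means either the provider DLL itself or one of the DLLs it depends on is missing. A minimal diagnostic sketch to narrow that down, assuming the relative provider path from the error above (adjust to your actual output folder):

    using System;
    using System.IO;
    using System.Runtime.InteropServices;

    // Error 126 means LoadLibrary could not find a module: either the provider
    // DLL itself or one of its dependencies (CUDA, cuDNN, zlib). This narrows it down.
    string providerDll = Path.Combine(AppContext.BaseDirectory,
        @"runtimes\win-x64\native\onnxruntime_providers_cuda.dll");
    Console.WriteLine($"File exists: {File.Exists(providerDll)}");
    Console.WriteLine($"Loads directly: {NativeLibrary.TryLoad(providerDll, out _)}");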

ErrorGz commented 1 month ago

I haven't used this code for a while, so I'm concerned my response might be incomplete. First, clone the dme-compunet/YoloV8 code into your project, then in the NuGet package manager uninstall Microsoft.ML.OnnxRuntime and install Microsoft.ML.OnnxRuntime.Gpu.

Second, add the zlibwapi.dll file to your project and set its 'Copy to Output Directory' property.

Third, when you call YoloV8Builder().WithSessionOptions, create a SessionOptions instance (SessionOptions options = new SessionOptions();) and call the appropriate options.AppendExecutionProvider_xxx() method. Below is an example using DirectML (DirectX 12) rather than the CUDA GPU provider; a CUDA variant is sketched after it.

    // Configure ONNX Runtime to use the DirectML execution provider
    SessionOptions options = new SessionOptions();
    options.AppendExecutionProvider_DML();
    var predictor2 = new YoloV8Builder()
        .WithSessionOptions(options)
        .UseOnnxModel(new BinarySelector("onnx/yolov8n-seg.onnx"))
        .Build();
    return predictor2;
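
For the CUDA GPU case this issue is about, a minimal sketch along the same lines, assuming the Microsoft.ML.OnnxRuntime.Gpu package is installed, the Compunet.YoloV8 namespace from the YoloV8 package, GPU device 0, and an example model path:

    using Compunet.YoloV8;
    using Microsoft.ML.OnnxRuntime;

    // Ask ONNX Runtime for the CUDA execution provider on GPU 0
    SessionOptions cudaOptions = new SessionOptions();
    cudaOptions.AppendExecutionProvider_CUDA(0);

    // Same builder pattern as the DirectML example above
    var predictor = new YoloV8Builder()
        .WithSessionOptions(cudaOptions)
        .UseOnnxModel(new BinarySelector("onnx/yolov8n-seg.onnx"))
        .Build();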
dme-compunet commented 1 month ago

@ckmstydy Try adding the CUDA bin folder to the Path environment variable.
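
If editing the system Path is inconvenient, one way to test this suggestion is to prepend the CUDA bin directory to the current process's PATH before any ONNX Runtime session is created. A sketch, where the CUDA install path below is only an example:

    using System;

    // Prepend the CUDA bin directory to PATH for this process only, so that
    // LoadLibrary can resolve the CUDA/cuDNN DLLs the provider depends on.
    const string cudaBin = @"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin";
    Environment.SetEnvironmentVariable("PATH",
        cudaBin + ";" + Environment.GetEnvironmentVariable("PATH"));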

ckmstydy commented 1 month ago

(Quoting @ErrorGz's reply above.)

Thanks, but unfortunately it didn't work for me

ckmstydy commented 1 month ago

@ckmstydy Try adding the CUDA bin folder to the Path environment variable.

I have already added it, and "nvcc -V" works correctly.
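
If the provider DLL is present but loading still fails with error 126 even with the CUDA bin folder on Path, a dependent DLL is the usual culprit. A small check, assuming typical CUDA 12.x / cuDNN 9.x DLL names (adjust to the versions actually installed):

    using System;
    using System.Runtime.InteropServices;

    // Verify that the DLLs onnxruntime_providers_cuda.dll depends on
    // can be resolved from the current PATH.
    string[] deps = { "cudart64_12.dll", "cublas64_12.dll", "cudnn64_9.dll", "zlibwapi.dll" };
    foreach (string dll in deps)
    {
        bool found = NativeLibrary.TryLoad(dll, out _);
        Console.WriteLine($"{dll}: {(found ? "found" : "NOT found")}");
    }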