cengwong opened this issue 3 years ago
Microsoft.ML.OnnxRuntime.Gpu packages ORT with the CUDA EP. Microsoft.ML.OnnxRuntime.MKLML packages ORT with MKLML support. Neither includes TensorRT or mkldnn (DNNL). If you want to use either of those, you'll have to build the binary yourself. See https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/DNNL-ExecutionProvider.md and https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/TensorRT-ExecutionProvider.md. You said you want to use the GPU to accelerate detection. For that, did the Gpu package not work for you?
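For reference, a minimal sketch of what using the Gpu package (CUDA EP only, no TensorRT/mkldnn headers) looks like in C++. The model path `model.onnx` and device id `0` are placeholders; this assumes the ORT version shipped in the Gpu NuGet package, where `cuda_provider_factory.h` and `OrtSessionOptionsAppendExecutionProvider_CUDA` are available:

```cpp
// Sketch: enable the CUDA execution provider from Microsoft.ML.OnnxRuntime.Gpu.
// Assumptions: "model.onnx" is a placeholder path; device id 0 is the first GPU.
#include <onnxruntime_cxx_api.h>
#include <cuda_provider_factory.h>  // ships with the Gpu package, not the CPU one

int main() {
  Ort::Env env{ORT_LOGGING_LEVEL_WARNING, "test"};

  Ort::SessionOptions session_options;
  // Register the CUDA EP on GPU device 0; ops it can't handle fall back to CPU.
  Ort::ThrowOnError(
      OrtSessionOptionsAppendExecutionProvider_CUDA(session_options, 0));

  // On Windows the model path is a wide string.
  Ort::Session session{env, L"model.onnx", session_options};
  // ... build input tensors and call session.Run(...) exactly as with the CPU package.
  return 0;
}
```

There is no `tensorrt_provider_factory.h` in this package, which is why those includes fail; only the CUDA line above is needed.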
@pranavsharma Thanks for your reply. I am a total freshman; actually I don't know how to create a session that will run on the GPU, and I haven't seen any tutorial that shows how to do it. Here is the code I am trying to reproduce: https://github.com/tenglike1997/onnxruntime-projects/blob/master/Windows/onnx_mobilenet/ort_test/ort_test.cpp. It includes tensorrt and mkldnn. Can you please send me a link showing how to use just the Gpu package to accelerate detection? I have a 1080 in my computer now. Again, thank you so much!
You only need that one line. Get rid of lines 53 and 54.
Thanks, I am doing that now. And can I also get rid of the #include?
Yes.
`Ort::Env env{ ORT_LOGGING_LEVEL_WARNING, "test" };` I cannot create an env now... Anyway, thanks for your reply, it does help me a lot. Now I am trying to fix the issue.
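When env creation fails, a common cause is that the native onnxruntime DLL from the Gpu package isn't next to the executable (or on PATH). A minimal, hedged way to see the actual error instead of a crash is to wrap the construction in a try/catch for `Ort::Exception` (this is only a diagnostic sketch, not the fix itself):

```cpp
// Sketch: isolate Ort::Env construction and print the failure reason, if any.
// Assumes the onnxruntime native DLL from the Gpu package is deployed with the exe.
#include <iostream>
#include <onnxruntime_cxx_api.h>

int main() {
  try {
    Ort::Env env{ORT_LOGGING_LEVEL_WARNING, "test"};
    std::cout << "env created ok\n";
  } catch (const Ort::Exception& e) {
    // The message here usually points at the missing DLL or CUDA runtime.
    std::cout << "env creation failed: " << e.what() << "\n";
  }
  return 0;
}
```

If this fails before `main` even runs (loader error dialog), the missing piece is almost certainly the native DLL or the CUDA/cuDNN runtime libraries rather than the code.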
Describe the bug I installed "Microsoft.ML.OnnxRuntime.Gpu" and "Microsoft.ML.OnnxRuntime.MKLML" in Visual Studio 2019 through NuGet, but I still couldn't include the header files "tensorrt_provider_factory.h" and "mkldnn_provider_factory.h". How can I fix it? Thanks.
Here is the thing. I had successfully detected targets with my ONNX model, ssd-mobilenetv2, in Visual Studio 2019. Similarly, I just installed "Microsoft.ML.OnnxRuntime" and "Microsoft.ML.OnnxRuntime.MKLML" through NuGet. Now I want to use the GPU to accelerate detection. How can I do that? Where can I find a tutorial? Thanks a lot.
System information