microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

Onnxruntime inference with Integrated GPU Failed #6755

Open RocskyLu opened 3 years ago

RocskyLu commented 3 years ago

Describe the bug I have built onnxruntime with OpenVINO following the instructions at https://www.onnxruntime.ai/docs/how-to/build.html#openvino. However, when I run the demo, it does not work on the integrated GPU, and I do not know what is wrong with the built version. The demo runs successfully with the CPU and CUDA execution providers.

System information

To Reproduce Just follow the instructions here https://www.onnxruntime.ai/docs/how-to/build.html#openvino.

    Ort::SessionOptions session_options_vino;
    session_options_vino.EnableCpuMemArena();
    session_options_vino.EnableMemPattern();
    session_options_vino.DisableProfiling();
    session_options_vino.SetInterOpNumThreads(0);
    session_options_vino.SetIntraOpNumThreads(0);
    session_options_vino.SetExecutionMode(ORT_SEQUENTIAL);
    session_options_vino.SetGraphOptimizationLevel(ORT_ENABLE_ALL);
    //OrtOpenVINOProviderOptions options;
    //options.device_type = "CPU_FP32";
    //options.enable_vpu_fast_compile = 0;
    //options.device_id = "";
    //options.num_of_threads = 8;
    //session_options_vino.AppendExecutionProvider_OpenVINO(options);
    OrtSessionOptionsAppendExecutionProvider_OpenVINO(session_options_vino, "GPU_FP32");
    session = Ort::Session(env, model_data, model_len, session_options_vino);
    cout << "load open vino successfully" << endl;
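For reference, the commented-out struct-based API in the snippet above can also target the iGPU by swapping the device type string. A minimal sketch, assuming the ONNX Runtime 1.7 C++ API (where `OrtOpenVINOProviderOptions` carries a `device_type` string and `Ort::SessionOptions::AppendExecutionProvider_OpenVINO` accepts it); the helper function name is my own:

```cpp
#include <onnxruntime_cxx_api.h>

// Sketch: configure session options for the Intel iGPU via the struct-based
// OpenVINO provider options, mirroring the commented-out block above but
// with "GPU_FP32" instead of "CPU_FP32".
Ort::SessionOptions MakeOpenVinoGpuOptions() {
    Ort::SessionOptions session_options;
    session_options.SetGraphOptimizationLevel(ORT_ENABLE_ALL);

    OrtOpenVINOProviderOptions options{};  // zero-initialize the C struct
    options.device_type = "GPU_FP32";      // integrated GPU, FP32 precision
    options.device_id = "";
    options.num_of_threads = 8;
    session_options.AppendExecutionProvider_OpenVINO(options);
    return session_options;
}
```

Either form should select the OpenVINO Execution Provider on the iGPU; the struct form just makes the extra knobs (device id, thread count) explicit.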
RocskyLu commented 3 years ago

I built the lib with:

    .\build.bat --config RelWithDebInfo --use_openvino GPU_FP32 --build_shared_lib

MaajidKhan commented 3 years ago

@RocskyLu can you confirm your Windows machine has Intel integrated graphics (iGPU)? If yes, you should be able to run your sample using the OpenVINO Execution Provider on Intel's iGPU. Please use the latest onnxruntime 1.7.0 release and the latest OpenVINO 2021.3 binary package.

Build instructions are at this page: https://www.onnxruntime.ai/docs/how-to/build.html#openvino

    .\build.bat --config Release --use_openvino GPU_FP32 --build_shared_lib --parallel

Note: the --parallel flag is optional; it speeds up the overall build by using all of your cores.

Yes, your build instructions are correct, but there is a bug with the RelWithDebInfo build. The fix is already in the ONNXRuntime master branch and will ship with the ONNXRuntime 1.8.0 release in about two weeks.

But for now, I would suggest using ONNXRuntime 1.7.0 and specifying --config Release while building.

If you face any issues during the build or while running the application, please paste the error logs or screenshots here and I will help you fix them.