intel / linux-npu-driver

Intel® NPU (Neural Processing Unit) Driver

OpenVINO unable to run model inference on NPU #48

Closed · azarulfahmin closed this 2 months ago

azarulfahmin commented 2 months ago

I have an issue with the NPU for DL Streamer in a Docker container. The build with the NPU driver installation completes without issue, but when I run the DL Streamer pipeline there is an error and the pipeline is unable to finish execution. It seems like GStreamer can detect the NPU, but at a certain point of the inference execution it hits a conflict. The error (from the GStreamer log, as in the attachment) is:

"/home/dlstreamer/dlstreamer/src/monolithic/gst/inference_elements/base/inference_singleton.cpp(181): acquire_inference_instance (): /GstPipeline:pipeline0/GstGvaDetect:detection:

Failed to construct OpenVINOImageInference

      Exception from src/inference/src/cpp/core.cpp:104:

Exception from src/inference/src/dev/plugin.cpp:54:

Exception from src/plugins/intel_npu/src/plugin/src/plugin.cpp:622:

Exception from src/plugins/intel_npu/src/plugin/src/compiled_model.cpp:61:

Check 'result == ZE_RESULT_SUCCESS' failed at src/plugins/intel_npu/src/compiler/src/zero_compiler_in_driver.cpp:738:

Failed to compile network. L0 createGraph result: ZE_RESULT_ERROR_UNSUPPORTED_VERSION, code 0x78000002. ERROR! MAPPED_INFERENCE_VERSION is NOT compatible with the ELF Expected: 6.1.0 vs received: 7.0.0"
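
The error itself points at a version mismatch: the compiler inside the driver produced an ELF that the installed NPU runtime/firmware does not accept (expected 6.1.0, received 7.0.0), which suggests the driver components on the two sides are out of step. A quick sanity check on the host (a sketch: `intel_vpu` and `/dev/accel/accel0` are the standard kernel module and device node, and the package names follow the linux-npu-driver release assets, so adjust for your setup):

```bash
# Check that the NPU kernel driver and device node are present,
# then list which NPU / Level Zero user-space packages are installed.
lsmod | grep intel_vpu               # intel_vpu is the NPU kernel module
ls -l /dev/accel/accel0              # device node created by intel_vpu
dpkg -l | grep -E 'level-zero|npu'   # compare versions on host vs. container
```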

I also tried with the OpenVINO benchmark sample script:

root@user-O-E-M:/opt/intel/openvino_2024.2.0/samples/python/benchmark/throughput_benchmark# python throughput_benchmark.py /home/pipeline-server/models/object_detection/yolov5s/FP16-INT8/yolov5s.xml NPU
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2024.2.0-15519-5c0f38f83f6-releases/2024/2
Traceback (most recent call last):
  File "/opt/intel/openvino_2024.2.0/samples/python/benchmark/throughput_benchmark/throughput_benchmark.py", line 89, in <module>
    main()
  File "/opt/intel/openvino_2024.2.0/samples/python/benchmark/throughput_benchmark/throughput_benchmark.py", line 47, in main
    compiled_model = core.compile_model(sys.argv[1], device_name, tput)
  File "/opt/intel/openvino_2024.2.0/python/openvino/runtime/ie_api.py", line 543, in compile_model
    super().compile_model(model, device_name, {} if config is None else config),
RuntimeError: Exception from src/inference/src/cpp/core.cpp:121:
Exception from src/inference/src/dev/plugin.cpp:59:
Exception from src/plugins/intel_npu/src/plugin/src/plugin.cpp:622:
Exception from src/plugins/intel_npu/src/plugin/src/compiled_model.cpp:61:
Check 'result == ZE_RESULT_SUCCESS' failed at src/plugins/intel_npu/src/compiler/src/zero_compiler_in_driver.cpp:738:
Failed to compile network. L0 createGraph result: ZE_RESULT_ERROR_UNSUPPORTED_VERSION, code 0x78000002.
ERROR! MAPPED_INFERENCE_VERSION is NOT compatible with the ELF Expected: 6.1.0 vs received: 7.0.0
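
Stripped of the benchmark harness, the failing call is just `compile_model` targeting the NPU, so the repro does not need DL Streamer at all. A minimal sketch (assuming the stock OpenVINO Python API; the model path is a placeholder):

```bash
# Minimal repro without DL Streamer or the benchmark script;
# replace yolov5s.xml with any IR model you have on disk.
python3 -c "
import openvino as ov
core = ov.Core()
print('devices:', core.available_devices)  # 'NPU' should appear here
core.compile_model('yolov5s.xml', 'NPU')   # this is the call that raises
"
```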

Below are the stack combinations that I tried:

Host OS: Ubuntu 22.04 and Ubuntu 24.04
Kernel: 6.8.0-40-generic on the Ubuntu 22.04 host; 6.8.0-41-generic / 6.10~ / 6.11~ on the Ubuntu 24.04 host
Docker image: intel/dlstreamer:2024.1.1-ubuntu22, intel/dlstreamer:2024.1.0-ubuntu22 and intel/dlstreamer:2024.0.2-ubuntu22
Intel NPU Driver: v1.1.0, v1.2.0, v1.5.0 and v1.6.0
OpenVINO version: 2024.2.0 and 2024.1.0
Model: YOLOv5s

I also tried this troubleshooting suggestion: `cmake -B build -S .` followed by `cmake --install build/ --component fw-npu --prefix /`. I just added these commands when building the driver, though; I am not really sure it is the correct way.
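
For what it's worth, those two commands configure the driver build and then install only the firmware component; after the firmware changes, the kernel module has to be reloaded to pick it up. A sketch of the full sequence under that assumption (the reload step is my addition, not part of the original suggestion):

```bash
cmake -B build -S .                                        # configure the driver build
sudo cmake --install build/ --component fw-npu --prefix /  # install the NPU firmware component
# Assumption: reload the kernel module (or reboot) so the new firmware is loaded.
sudo modprobe -r intel_vpu && sudo modprobe intel_vpu
```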

Here is my previous question on a similar issue in another repo: https://github.com/intel/intel-npu-acceleration-library/issues/64#issuecomment-2319725265

azarulfahmin commented 2 months ago

Resolved by reinstalling Level Zero from the packaged release file, then building and installing the linux-npu-driver again on both the host and in the Docker container.
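
For anyone landing here later, a sketch of what that fix looks like, assuming the .deb packages from the level-zero and linux-npu-driver GitHub release pages (exact filenames and versions will differ):

```bash
# Run on the host, then repeat inside the Docker image so both sides match.
sudo dpkg -i level-zero_*.deb                  # Level Zero loader from its releases page
sudo dpkg -i intel-driver-compiler-npu_*.deb \
             intel-fw-npu_*.deb \
             intel-level-zero-npu_*.deb        # NPU user-mode driver packages
```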