openvinotoolkit / openvino_notebooks

📚 Jupyter notebook tutorials for OpenVINO™
Apache License 2.0
2.48k stars 819 forks

How to use NPU while compiling hello-world #2575

Open bonihaniboni opened 17 hours ago

bonihaniboni commented 17 hours ago

https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/hello-world/hello-world.ipynb

I tried to compile this model (mobilenet-v3-tf/FP32) using an Intel LNL NPU, so I changed the code.

Select Inference Device

```python
device = "NPU"
compiled_model = core.compile_model(model=model, device_name=device)
```

I understand that I cannot use the NPU for inference, but I wonder why I cannot use the NPU while compiling the model. I also wonder whether there is any general code that can initialize and utilize the Intel NPU (like the Intel IGCL API).

Thank you very much

brmarkus commented 17 hours ago

Can you use "https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/hello-npu" instead? Do you see it working?

bonihaniboni commented 2 hours ago

I also tried that sample, but I found that only the `!benchmark_app -m {model_path} -d NPU -hint latency` command uses the NPU. During compilation I saw only the CPU working, even though I satisfied all the prerequisites. I thought the NPU would be used during compilation too. Am I wrong?