bonihaniboni opened 17 hours ago
Can you use https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/hello-npu instead? Does it work for you?
I also tried that sample, but I found that only the `!benchmark_app -m {model_path} -d NPU -hint latency` command uses the NPU. During compilation I saw only the CPU working, even though I satisfied all the prerequisites. I thought the NPU would be used for compiling too. Am I wrong?
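A minimal sketch, assuming a recent `openvino` Python package, to confirm the runtime actually sees the NPU before compiling for it:

```python
import openvino as ov

core = ov.Core()
# "NPU" must appear in this list; if it does not, the NPU driver or plugin
# is missing and compile_model(..., device_name="NPU") will raise an error.
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']
```

Note that seeing CPU activity during compilation appears to be expected: the NPU compiler runs on the host CPU and produces a compiled blob, and the NPU itself only executes that blob once inference starts.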
https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/hello-world/hello-world.ipynb
I tried to compile this model (mobilenet-v3-tf/FP32) using the Intel LNL NPU, so I changed the code:
Select Inference Device
```python
device = "NPU"
compiled_model = core.compile_model(model=model, device_name=device)
```
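Since NPU compilation is CPU-heavy and can be slow, here is a sketch of enabling OpenVINO's model cache so the compile cost is paid only once (the cache directory and model path are hypothetical):

```python
import openvino as ov

core = ov.Core()
# With CACHE_DIR set, the first compile_model() call stores the compiled
# blob on disk; subsequent runs load it instead of recompiling on the CPU.
core.set_property({"CACHE_DIR": "model_cache"})  # hypothetical directory

model = core.read_model("model.xml")  # hypothetical path
compiled_model = core.compile_model(model, device_name="NPU")
```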
I understand that I cannot use the NPU for inference, but I wonder why I cannot use the NPU while compiling the model. I also wonder whether there is any general code that can initialize and utilize an Intel NPU (like the Intel IGCL API).
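A minimal end-to-end sketch of initializing the NPU through the OpenVINO API and running one inference, assuming a recent `openvino` package and a static-shape model (the model path and dummy input are hypothetical):

```python
import numpy as np
import openvino as ov

core = ov.Core()

# Compile for the NPU target; this step itself runs on the host CPU.
model = core.read_model("v3-small_224_1.0_float.xml")  # hypothetical path
compiled_model = core.compile_model(model, device_name="NPU")

# A zero-filled input matching the model's static input shape; the
# inference call below is what actually executes on the NPU.
dummy = np.zeros(list(compiled_model.input(0).shape), dtype=np.float32)
result = compiled_model(dummy)
print(result[compiled_model.output(0)].shape)
```

(For lower-level access than OpenVINO, I believe Intel's Level Zero driver API also exposes the NPU, but OpenVINO is the usual entry point.)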
Thank you very much