sikhness opened 5 months ago

Currently, only a subset of devices can be passed through to Windows containers, with GPUs being one of them (and even then limited to DirectX-based frameworks). With the rise of NPUs/IPUs built into processors, it would be beneficial to provide NPU acceleration in Windows containers so we can containerize our AI/ML workloads.
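For reference, the existing GPU path looks roughly like the sketch below. The class GUID is the DirectX display-adapter device-interface class Microsoft documents for process-isolated containers; the image tag is just an example:

```powershell
# Existing (DirectX-only) GPU passthrough for a process-isolated
# Windows container. The GUID is the documented DirectX display-adapter
# device-interface class; substitute your own image.
docker run --isolation process `
    --device class/5B45201D-F2F2-4F3B-85BB-30FF1F953599 `
    mcr.microsoft.com/windows:ltsc2022
```

An equivalent device class for NPUs is essentially what this request is asking for.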
Hey @sikhness, similar question to your other issue. Can you help me understand what sort of workloads you're trying to run with NPU acceleration? Understanding this use case will help us better prioritize this request as we explore AI/ML workloads.
Hey @fady-azmy-msft! Similar to my other question, I did list out a few AI-related workloads that would benefit from GPU acceleration via vendor-specific graphics APIs.
Some of those same AI workloads can also benefit from offloading that work to the NPU. Ryzen AI, for example, provides instructions on how to install, prepare, and run AI models on the NPU on Windows. It would be very beneficial to be able to containerize these applications for the isolation and portability benefits while still leveraging the hardware.
Got it. Tagging @NAWhitehead to look into this. He's driving the GPU scenarios for Windows containers, and this is related.
I think you should get the class GUID for "Neural processors", try passing it as `--device class/<the_guid>`, copy the drivers from the FileRepository into the container, and then see if the NPU works. Odds are low, but crazier things have been true.
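Totally untested sketch of what I mean. The class-name filter, the placeholder GUID, the image tag, and the driver folder name are all assumptions you'd have to substitute for your system:

```powershell
# 1. On the host, list present devices with their class GUIDs and hunt
#    for the NPU (the class name varies by vendor/OS build --
#    "ComputeAccelerator", "Neural processors", etc.).
Get-PnpDevice -PresentOnly |
    Select-Object FriendlyName, Class, ClassGuid |
    Sort-Object Class

# 2. Try assigning that class to a process-isolated container.
#    Replace the placeholder GUID with whatever step 1 reported.
docker run -it --isolation process `
    --device "class/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" `
    mcr.microsoft.com/windows/servercore:ltsc2022 cmd

# 3. Copy the NPU driver package from the host's driver store into the
#    container (the folder name is a placeholder -- find the real one
#    under C:\Windows\System32\DriverStore\FileRepository).
docker cp "C:\Windows\System32\DriverStore\FileRepository\<npu_driver_folder>" `
    <container_id>:"C:\npu-drivers"
```

One caveat: Docker's `class/` IdType expects a device *interface* class GUID, which isn't necessarily the same as the setup-class GUID that `Get-PnpDevice` reports, so this may take some digging either way.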