microsoft / Windows-Containers

Welcome to our Windows Containers GitHub community! Ask questions, report bugs, and suggest features -- let's work together.

NPU Acceleration #505

Open sikhness opened 5 months ago

sikhness commented 5 months ago

Currently, only a subset of devices can be passed through to Windows containers, with GPUs being one of them (and even then limited to DirectX-based frameworks). With the rise of NPUs/IPUs built into processors, it would be beneficial to support NPU acceleration in Windows Containers so that we can containerize our AI/ML workloads.

fady-azmy-msft commented 5 months ago

Hey @sikhness, similar question to your other issue: can you help me understand what sort of workloads you are trying to run with NPU acceleration? Understanding the use case will help us prioritize this request as we explore AI/ML workloads.

sikhness commented 5 months ago

Hey @fady-azmy-msft! Similar to my other question, I listed a few AI-related workloads that would benefit from GPU acceleration via vendor-specific graphics APIs.

Some of those same AI workloads can now also benefit from offloading work to the NPU. Ryzen AI is one example: it provides instructions for installing, preparing, and running AI models on the NPU on Windows. It would be very beneficial to be able to containerize these applications for the isolation and portability benefits while still leveraging the hardware.

fady-azmy-msft commented 4 months ago

Got it. Tagging @NAWhitehead to look into this. He's driving the Windows containers GPU scenarios, and this is related.

doctorpangloss commented 4 months ago

I think you should get the class GUID for "Neural processors", try passing it to the container with `--device "class/<the GUID>"`, copy the drivers from the FileRepository into the container, and then see if the NPU works. Odds are low, but crazier things have been true.
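
For anyone who wants to try it, a rough, untested sketch of that experiment might look like the following. It assumes process isolation (the `--device "class/<GUID>"` syntax only applies to process-isolated Windows containers), that the NPU shows up as a PnP device on the host, and that the class GUID, driver folder name, and base image tag are placeholders to substitute; there is no guarantee the vendor's NPU runtime will actually initialize inside the container.

```powershell
# 1. On the host, find the NPU and its device setup class GUID.
Get-PnpDevice -PresentOnly |
    Where-Object { $_.FriendlyName -match 'NPU|Neural' } |
    Select-Object FriendlyName, Class, ClassGuid

# 2. Start a process-isolated container with that device class exposed, bind-mounting
#    the host's driver store so the driver package can be copied in.
docker run --rm -it --isolation=process `
    --device "class/<REPLACE-WITH-CLASS-GUID>" `
    -v C:\Windows\System32\DriverStore\FileRepository:C:\host-drivers `
    mcr.microsoft.com/windows/servercore:ltsc2022 powershell

# 3. Inside the container: copy the vendor's NPU driver folder (name varies by
#    vendor and driver version) and check whether the device is visible at all.
Copy-Item 'C:\host-drivers\<npu-driver-folder>' 'C:\npu-drivers' -Recurse
Get-PnpDevice -PresentOnly | Where-Object { $_.FriendlyName -match 'NPU|Neural' }
```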