intel / intel-xpu-backend-for-triton

OpenAI Triton backend for Intel® GPUs
MIT License

Make sure `ext_oneapi_get_default_context` doesn't break the runtime on Windows #2742

Closed anmyachev closed 6 days ago

anmyachev commented 1 week ago

Part of #2478 (to reduce diff)

These changes are quite stable; we can merge them without CI on Windows. @gshimansky if you don't mind.

gshimansky commented 1 week ago

I found how pytorch implements the same functionality on Linux and Windows. You may want to take a look. https://github.com/pytorch/pytorch/blob/main/c10/xpu/XPUFunctions.cpp#L59-L72

anmyachev commented 1 week ago

> I found how pytorch implements the same functionality on Linux and Windows. You may want to take a look. https://github.com/pytorch/pytorch/blob/main/c10/xpu/XPUFunctions.cpp#L59-L72

This is interesting, but it would also require us to implement the `enumDevices` function https://github.com/pytorch/pytorch/blob/0c7c5d78faa61245700ba6f2d0c237019090f684/c10/xpu/XPUFunctions.cpp#L30. Let's keep the current version for now, since it works, and open an issue about how this can be improved.

anmyachev commented 6 days ago

@alexbaden ready for review

anmyachev commented 6 days ago

https://github.com/intel/intel-xpu-backend-for-triton/issues/2757