ZzEeKkAa opened 1 week ago
Got it. We will prioritize fixing the torch-xpu-ops issue.
Okay, I've created a tiny PR that fixes the issue: https://github.com/ZzEeKkAa/pytorch/pull/2 We need to add it as a patch:
diff --git a/caffe2/CMakeLists.txt b/caffe2/CMakeLists.txt
index f9eb6fe2b3832..6bfca37264a62 100644
--- a/caffe2/CMakeLists.txt
+++ b/caffe2/CMakeLists.txt
@@ -1049,6 +1049,15 @@ if(USE_XPU)
     if(NOT _exitcode EQUAL 0)
       message(FATAL_ERROR "Fail to checkout ${TORCH_XPU_OPS_REPO_URL} to ${TORCH_XPU_OPS_COMMIT}")
     endif()
+    # execute_process pipes each COMMAND's stdout into the next, so the
+    # diff fetched by curl feeds straight into `git apply -`
+    execute_process(
+      COMMAND curl -Ls https://github.com/intel/torch-xpu-ops/pull/1017.diff
+      COMMAND git apply -
+      WORKING_DIRECTORY ${TORCH_XPU_OPS_DIR}
+      RESULT_VARIABLE _exitcode)
+    if(NOT _exitcode EQUAL 0)
+      message(FATAL_ERROR "Failed to apply patch from PR#1017")
+    endif()
     set(TORCH_XPU_OPS_INCLUDE_DIRS
       ${TORCH_SRC_DIR}/csrc/api
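For anyone who wants to apply the same fix to an existing torch-xpu-ops checkout outside the pytorch build, the download-and-apply step can be run as a standalone CMake script (a minimal sketch; the `TORCH_XPU_OPS_DIR` path here is a placeholder you must point at your own checkout, and the step needs network access):

```cmake
# apply_patch.cmake -- run with: cmake -P apply_patch.cmake
# Adjust this path to your local torch-xpu-ops checkout (placeholder value).
set(TORCH_XPU_OPS_DIR "/path/to/torch-xpu-ops")

# execute_process runs the listed COMMANDs as a pipeline: curl's stdout
# (the raw PR diff) becomes stdin of `git apply -`. RESULT_VARIABLE holds
# the exit code of the last command in the pipeline.
execute_process(
  COMMAND curl -Ls https://github.com/intel/torch-xpu-ops/pull/1017.diff
  COMMAND git apply -
  WORKING_DIRECTORY ${TORCH_XPU_OPS_DIR}
  RESULT_VARIABLE _exitcode)
if(NOT _exitcode EQUAL 0)
  message(FATAL_ERROR "Failed to apply patch from PR#1017")
endif()
```

The shell equivalent is simply `curl -Ls https://github.com/intel/torch-xpu-ops/pull/1017.diff | git apply -` run from inside the checkout.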
The PR has been merged into torch-xpu-ops: https://github.com/intel/torch-xpu-ops/pull/1017
If you try to build pytorch using deep-learning-essentials or any 2025 compiler, it will fail due to https://github.com/intel/torch-xpu-ops/issues/1027.
As a workaround, I've created a torch-xpu-ops fork that contains fixes that work only with the 2025 compiler. It is based on the pinned commit of pytorch (https://github.com/intel/intel-xpu-backend-for-triton/blob/main/.github/pins/pytorch-upstream.txt) -> pinned commit of torch-xpu-ops (https://github.com/pytorch/pytorch/blob/0efa590d435d2b4aefcbad9014dd5fa75dcf8405/third_party/xpu.txt): https://github.com/ZzEeKkAa/torch-xpu-ops/pull/1/files
And the corresponding commit in pytorch: https://github.com/pytorch/pytorch/commit/0b3b4a7ab93d85a2073e14ce2dca3ed71522acde