intel / intel-xpu-backend-for-triton

OpenAI Triton backend for Intel® GPUs
MIT License

Make the llvm-target branch use the Triton plugin infrastructure #170

Closed etiotto closed 1 month ago

etiotto commented 8 months ago

The code in the llvm-target branch is a fork of the OpenAI Triton code with modifications in several files. The structure of the project mirrors the structure of the AMD port. The objective of this work item is to make the project use the Triton plugin infrastructure. The work is similar to what is required for the AMD port, which is also a fork, so we should determine whether we can use the same mechanism.

Jianhui-Li commented 8 months ago

AMD actually supports both ways: it started as a fork, and also implemented a plugin.

https://github.com/ROCmSoftwarePlatform/triton/blob/triton-mlir/python/triton/third_party/hip/rocm_backend_for_triton.cc

chengjunlu commented 8 months ago

We will keep both ways for Intel as well.

When the Intel XPU backend is built as a plug-in, we'd have to re-configure the CMake build in the root CMakeLists.txt of the Intel XPU backend repo. That is because some targets have the same name but different source code, e.g. the target TritonGPUToLLVM. Unlike AMD, the Intel-related code lives only in the Intel XPU backend, while the public Triton repo contains the AMD code.
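The target-name clash could be worked around with a build option that renames the colliding target when building as a plugin. A minimal sketch; the names used here (`TRITON_BUILD_AS_PLUGIN`, `TritonIntelGPUToLLVM`, `INTEL_GPU_TO_LLVM_SOURCES`) are illustrative assumptions, not the actual options in the repo:

```cmake
# Sketch only: option and target names here are illustrative, not the
# actual configuration used by intel-xpu-backend-for-triton.
option(TRITON_BUILD_AS_PLUGIN "Build the Intel XPU backend as a Triton plugin" OFF)

if(TRITON_BUILD_AS_PLUGIN)
  # Upstream Triton already defines a TritonGPUToLLVM target, so the
  # plugin build must register the Intel sources under a distinct name.
  set(GPU_TO_LLVM_TARGET TritonIntelGPUToLLVM)
else()
  # Standalone build: no clash, so the upstream target name can be reused.
  set(GPU_TO_LLVM_TARGET TritonGPUToLLVM)
endif()

add_library(${GPU_TO_LLVM_TARGET} STATIC ${INTEL_GPU_TO_LLVM_SOURCES})
```

Renaming under an option keeps the standalone build unchanged while letting the plugin build coexist with upstream targets in a single CMake tree.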

The Intel XPU backend llvm-target branch will support the latest Triton plugin infrastructure (Triton version > 2.2). For Triton version <= 2.2, we will still use the SPIRV target that is in the main branch now.

Is that Ok?
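The version split described above could be expressed as a gate in the backend's build logic. A hedged CMake sketch, assuming a hypothetical `TRITON_VERSION` cache variable that is not an actual option exposed by either build:

```cmake
# Sketch only: TRITON_VERSION is an assumed variable, not an actual
# option exposed by the Triton or Intel XPU backend builds.
set(TRITON_VERSION "2.3" CACHE STRING "Version of Triton being built against")

if(TRITON_VERSION VERSION_GREATER "2.2")
  # Triton > 2.2: hook into the plugin infrastructure (llvm-target branch).
  message(STATUS "Using the Triton plugin infrastructure")
else()
  # Triton <= 2.2: fall back to the SPIRV target from the main branch.
  message(STATUS "Using the legacy SPIRV target")
endif()
```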

etiotto commented 8 months ago

> We will keep both ways for Intel as well.

Yes, agreed.

> When the Intel XPU backend is built as a plug-in, we'd have to re-configure the CMake build in the root CMakeLists.txt of the Intel XPU backend repo. That is because some targets have the same name but different source code, e.g. the target TritonGPUToLLVM. Unlike AMD, the Intel-related code lives only in the Intel XPU backend, while the public Triton repo contains the AMD code.

OK. We need to be able to change any file that OpenAI has in our fork (until OpenAI lets us upstream changes to common files). If you post a PR we can iterate on the actual code changes.

> The Intel XPU backend llvm-target branch will support the latest Triton plugin infrastructure (Triton version > 2.2). For Triton version <= 2.2, we will still use the SPIRV target that is in the main branch now.

> Is that Ok?

Yes, I think that is fine. The LLVM target can be used for new versions of Triton only.

aregm commented 7 months ago

@chengjunlu - what's the status here?

chengjunlu commented 7 months ago

> @chengjunlu - what's the status here?

The final code change will be ready this week.

It supports building the Intel XPU backend llvm-target branch as a third-party plug-in, as well as building the Intel XPU backend standalone.

The PR is under review: https://github.com/intel/intel-xpu-backend-for-triton/pull/180

vlad-penkin commented 4 months ago

@etiotto is this ticket still valid?