linkedin / Liger-Kernel

Efficient Triton Kernels for LLM Training
BSD 2-Clause "Simplified" License

[AMD] Implement Flash Attention in Triton to enable transformers to run with Flash Attention on AMD GPUs. #126

Open ByronHsu opened 2 weeks ago

ByronHsu commented 2 weeks ago

🚀 The feature, motivation and pitch

The official implementation of Flash Attention is in CUDA, so on AMD GPUs users cannot easily use Flash Attention with transformers to train LLMs. With this support, we can unlock many exciting use cases on AMD. The code is already there at https://triton-lang.org/main/getting-started/tutorials/06-fused-attention.html.
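
For reference, here is a minimal pure-PyTorch sketch of the streaming ("online") softmax that the Triton fused-attention tutorial kernel implements in a single fused pass. It only illustrates the algorithm (block size, shapes, and the non-causal setting are arbitrary choices for the example); it is not the kernel itself.

```python
# Forward-only, non-causal attention computed one KV block at a time,
# using the online-softmax rescaling trick that flash attention relies on.
import torch

def flash_attention_reference(q, k, v, block_size=128):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scale = q.shape[-1] ** -0.5
    seq_len = k.shape[-2]
    acc = torch.zeros_like(q)                                            # running weighted sum of V
    row_max = torch.full(q.shape[:-1], float("-inf"), device=q.device)   # running max of scores
    row_sum = torch.zeros(q.shape[:-1], device=q.device)                 # running softmax denominator

    for start in range(0, seq_len, block_size):
        k_blk = k[..., start:start + block_size, :]
        v_blk = v[..., start:start + block_size, :]
        scores = torch.einsum("bhqd,bhkd->bhqk", q, k_blk) * scale

        new_max = torch.maximum(row_max, scores.amax(dim=-1))
        correction = torch.exp(row_max - new_max)       # rescale what was accumulated so far
        p = torch.exp(scores - new_max.unsqueeze(-1))
        acc = acc * correction.unsqueeze(-1) + torch.einsum("bhqk,bhkd->bhqd", p, v_blk)
        row_sum = row_sum * correction + p.sum(dim=-1)
        row_max = new_max

    return acc / row_sum.unsqueeze(-1)

# Sanity check against PyTorch's reference implementation.
q, k, v = (torch.randn(2, 4, 256, 64) for _ in range(3))
out = flash_attention_reference(q, k, v)
ref = torch.nn.functional.scaled_dot_product_attention(q, k, v)
assert torch.allclose(out, ref, atol=1e-4)
```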

Another option is to use FlexAttention from the PyTorch team, which uses torch.compile to optimize on top of existing handwritten Triton kernels.
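
A usage sketch of that alternative, assuming PyTorch 2.5+ installed from a ROCm wheel (shapes and the causal mask are just placeholders for the example):

```python
# FlexAttention lowers to generated Triton kernels via torch.compile,
# so no hand-written CUDA is required.
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

def causal(b, h, q_idx, kv_idx):
    # A query position may only attend to itself and earlier key positions.
    return q_idx >= kv_idx

B, H, S, D = 2, 8, 1024, 64
q, k, v = (torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
           for _ in range(3))

block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S, device="cuda")
compiled_flex = torch.compile(flex_attention)   # compilation generates the fused Triton kernel
out = compiled_flex(q, k, v, block_mask=block_mask)
```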

Alternatives

No response

Additional context

No response

helloworld1 commented 2 weeks ago

The FA provided by https://github.com/Dao-AILab/flash-attention supports only MI200 or MI300 GPUs. With Triton 3.0, the kernel can work on a much broader range of AMD GPUs. I tested the kernels on the AMD 7000 series and they work great.
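
For anyone unsure which camp their card falls into, a small check sketch (the `gcnArchName` attribute is exposed on recent ROCm builds of PyTorch; the `getattr` fallback covers builds where it is absent):

```python
# Print the device name and GCN architecture string seen by the ROCm PyTorch build.
import torch

props = torch.cuda.get_device_properties(0)
print(torch.cuda.get_device_name(0))          # e.g. "AMD Radeon RX 7900 XTX"
print(getattr(props, "gcnArchName", "n/a"))   # e.g. "gfx1100" (Navi 31) or "gfx90a" (MI200)
```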

thevasudevgupta commented 2 weeks ago

I implemented flash attention v1 in Triton as well. Feel free to copy/adapt from here: https://github.com/thevasudevgupta/gpt-triton/blob/6a12b71e4e332a2077e6b7f742f97c7160fe0242/kernels.py#L376 (my repo is MIT licensed!!)

I might work on the v2/v3 versions in the future. Will let you know when I finish.

unclemusclez commented 2 weeks ago

Working Navi 31 / 7900 / gfx1100 support: https://github.com/ROCm/flash-attention/tree/howiejay/navi_support