NVIDIA / cutlass

CUDA Templates for Linear Algebra Subroutines

[QST] Where is FlashAttention-2 CUTLASS kernel #1838

Open yoon5862 opened 2 weeks ago

yoon5862 commented 2 weeks ago

Hello, I am studying the fused_multi_head_attention example in CUTLASS. The CUTLASS 3.5.1 README.md says the FlashAttention-2 kernel is built on CUTLASS, but the fused_multi_head_attention example is based on Meta's xFormers, and I cannot find the FlashAttention-2 CUTLASS kernels. Are fused_multi_head_attention and FlashAttention the same thing?

thakkarV commented 2 weeks ago

xFormers is different from FA2 and FA3. FA2 and FA3 are downstream of CUTLASS in @tridao's FA repo itself: https://github.com/Dao-AILab/flash-attention. Both FA2 and FA3 are written using CUTLASS.
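
For reference, the FA2 kernels are consumed through the flash-attn Python package rather than as a CUTLASS example. A minimal usage sketch, assuming the package is installed (`pip install flash-attn`) and a CUDA GPU is available; the tensor shapes below are just illustrative:

```python
import torch
from flash_attn import flash_attn_func

# Q/K/V in (batch, seqlen, nheads, headdim) layout, half precision on the GPU.
q = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")
k = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")
v = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")

# Dispatches to the FA2 CUDA kernel (built on CUTLASS) under the hood.
out = flash_attn_func(q, k, v, causal=True)
print(out.shape)  # torch.Size([2, 1024, 8, 64])
```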

yoon5862 commented 2 weeks ago

Thank you for the reply. FlashAttention concentrates on A100 and H100 kernels. I'm curious whether the FlashAttention kernels are efficient on Jetson AGX or the RTX series. If they are not, do they need to be tuned for efficient inference?

Thank you.

thakkarV commented 2 weeks ago

FA2 should work quite well on all Sm8x GPUs, which includes the RTX 3000 and RTX 4000 series. I suspect it works well on Jetson Orin too, since that is Sm8x as well. YMMV, so you should benchmark to confirm. If it is not near peak utilization, it should be quite easy to tune. Although for inference, I suspect you want flash decode instead?
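
A minimal timing sketch for that benchmarking step, assuming the flash-attn package is installed; `bench_fa2` and its default problem sizes are illustrative, not a reference harness:

```python
import torch
from flash_attn import flash_attn_func

def bench_fa2(batch=1, seqlen=2048, nheads=16, headdim=64, iters=100):
    q, k, v = (torch.randn(batch, seqlen, nheads, headdim,
                           dtype=torch.float16, device="cuda") for _ in range(3))

    # Warm-up so first-launch overhead is excluded from the measurement.
    for _ in range(10):
        flash_attn_func(q, k, v, causal=True)

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        flash_attn_func(q, k, v, causal=True)
    end.record()
    torch.cuda.synchronize()
    ms = start.elapsed_time(end) / iters

    # Two matmuls (QK^T and PV) give ~4 * B * H * S^2 * D FLOPs,
    # roughly halved by the causal mask.
    flops = 4 * batch * nheads * seqlen * seqlen * headdim * 0.5
    print(f"{ms:.3f} ms/iter, ~{flops / (ms * 1e-3) / 1e12:.1f} TFLOP/s")

bench_fa2()
```

Comparing the reported TFLOP/s against the card's FP16 tensor-core peak gives a quick sense of whether further tuning is worthwhile.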

yoon5862 commented 2 weeks ago

Thank you for the reply. I will benchmark it on Sm8x series GPUs.