flashinfer-ai / flashinfer

FlashInfer: Kernel Library for LLM Serving
https://flashinfer.ai
Apache License 2.0

feat: add `use_tensor_cores` option to decode kernels to accelerate GQA #317

Closed yzh119 closed 2 weeks ago

yzh119 commented 2 weeks ago

The tensor-core-accelerated GQA described in our blog post was not enabled by default (users had to call the prefill kernels/wrappers for decode to get that speedup).

In this PR we add a `use_tensor_cores` option to the decode operators/wrappers, so users can choose whether to use tensor cores for acceleration depending on their use case.
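
For example, a minimal sketch of the single-request decode path with the new flag (the head counts, head dimension, and KV length below are illustrative, not prescribed by this PR):

```python
import torch
import flashinfer

# Illustrative GQA configuration: 32 query heads share 8 KV heads (group size 4).
num_qo_heads, num_kv_heads, head_dim, kv_len = 32, 8, 128, 4096

q = torch.randn(num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")

# Opt in to the tensor-core (prefill-based) path for decode.
o = flashinfer.single_decode_with_kv_cache(q, k, v, use_tensor_cores=True)
```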

Note that our prefill kernels are compiled for all possible group sizes (#301), but the decode kernels are not. So if you need a group size outside the pre-compiled set, it's encouraged to set `use_tensor_cores=True`, as in the sketch below.
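
For the batch decode wrapper, the option is passed at construction time; a sketch assuming the paged-KV wrapper API (the workspace buffer size here is illustrative):

```python
import torch
import flashinfer

# Illustrative workspace buffer; the required size depends on batch and sequence sizes.
workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")

# With use_tensor_cores=True the wrapper dispatches to the tensor-core
# (prefill-based) kernels, which support arbitrary group sizes.
decode_wrapper = flashinfer.BatchDecodeWithPagedKVCacheWrapper(
    workspace, kv_layout="NHD", use_tensor_cores=True
)
# ...then call the wrapper's planning method with the paged-KV metadata
# and run decode as usual; only the constructor flag changes.
```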