This PR is a follow-up to #32 and adds the ability to use a quantized K- and V-cache in the flash attention (FA) kernel. `Q4_0`, `Q4_1` and `Q8_0` are supported as cache quantization types. Adding further types is trivial, but the implementation is templated on the cache types, so the number of template instantiations grows quadratically with the number of supported quantization types; hence I decided to settle on these 3 types for now.
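To illustrate why the instantiation count grows quadratically, here is a minimal, self-contained sketch (hypothetical names, not the actual kernel code): if the FA kernel is templated on both the K and the V cache type, every (K type, V type) pair gets its own instantiation, so N supported types cost N×N kernels.

```cpp
#include <cstdio>

enum class cache_type { F16, Q4_0, Q4_1, Q8_0 };

// Stand-in for the templated FA kernel: one instantiation per (K type, V type) pair.
template <cache_type TK, cache_type TV>
void flash_attn_kernel() {
    // A real kernel would dequantize K/V blocks of type TK/TV and run FA here.
    std::printf("instantiation for K=%d, V=%d\n", (int)TK, (int)TV);
}

// Runtime dispatch from the two cache types to the matching instantiation.
template <cache_type TK>
void dispatch_v(cache_type tv) {
    switch (tv) {
        case cache_type::F16:  flash_attn_kernel<TK, cache_type::F16 >(); break;
        case cache_type::Q4_0: flash_attn_kernel<TK, cache_type::Q4_0>(); break;
        case cache_type::Q4_1: flash_attn_kernel<TK, cache_type::Q4_1>(); break;
        case cache_type::Q8_0: flash_attn_kernel<TK, cache_type::Q8_0>(); break;
    }
}

void dispatch(cache_type tk, cache_type tv) {
    switch (tk) {
        case cache_type::F16:  dispatch_v<cache_type::F16 >(tv); break;
        case cache_type::Q4_0: dispatch_v<cache_type::Q4_0>(tv); break;
        case cache_type::Q4_1: dispatch_v<cache_type::Q4_1>(tv); break;
        case cache_type::Q8_0: dispatch_v<cache_type::Q8_0>(tv); break;
    }
}

int main() {
    dispatch(cache_type::Q8_0, cache_type::Q4_0); // one of the 4*4 = 16 possible pairs
}
```

With 4 cache types (f16 plus the 3 quantized ones) that is already 16 kernel instantiations, which is why each additional type noticeably increases compile time and binary size.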
Performance is slightly lower than with an `fp16` cache (see graph below), so the main use case is KV-cache size reduction at very large context lengths. Still, unlike mainline `llama.cpp`, performance remains strictly above the no-FA baseline.
The graph below shows PP performance as a function of context length (logarithmic scale) for Gemma-2-2b quantized with `Q4_K_S` on a Ryzen-7950X CPU.