a-ghorbani / pocketpal-ai

An app that brings language models directly to your phone.

[Feat]: quantized KV cache and flash attention #79

Open · mseri opened this issue 2 weeks ago

mseri commented 2 weeks ago

Description

Flash attention and a quantized KV cache are both supported by llama.cpp (exposed in its CLI via `-fa`/`--flash-attn` and `--cache-type-k`/`--cache-type-v`).

These features allow much larger contexts with a drastically reduced memory footprint, which would be quite convenient given the limited resources on a phone.

With q8_0 quantization, the KV cache needs roughly half the memory of the f16 default for the same context size, with barely any effect on output quality (q4_0 needs roughly a quarter of the memory, but in my tests the degradation is noticeable).
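For a sense of the savings, here is a rough back-of-the-envelope estimate. The model shape (32 layers, 8 KV heads, head dimension 128, i.e. a Llama-3-8B-like model) and the 8192-token context are illustrative assumptions; the per-element sizes follow the ggml q8_0/q4_0 block layouts (32 elements plus a 2-byte f16 scale per block):

```ts
// Rough KV-cache size estimate. Model dimensions are illustrative
// assumptions, not measurements from PocketPal.
const BYTES_PER_ELEMENT: Record<string, number> = {
  f16: 2, // 16-bit float, no blocking
  q8_0: 34 / 32, // 32 x 1-byte values + 2-byte f16 scale per block
  q4_0: 18 / 32, // 32 x 0.5-byte values + 2-byte f16 scale per block
};

function kvCacheBytes(
  nCtx: number, // context length in tokens
  nLayers: number, // transformer layers
  nKvHeads: number, // KV heads (GQA)
  headDim: number, // dimension per head
  cacheType: keyof typeof BYTES_PER_ELEMENT,
): number {
  // 2x for the K and the V tensors.
  return 2 * nLayers * nCtx * nKvHeads * headDim * BYTES_PER_ELEMENT[cacheType];
}

// Llama-3-8B-like model at an 8192-token context:
for (const t of ['f16', 'q8_0', 'q4_0'] as const) {
  console.log(t, (kvCacheBytes(8192, 32, 8, 128, t) / 1024 ** 2).toFixed(0), 'MiB');
}
// f16 ~1024 MiB, q8_0 ~544 MiB, q4_0 ~288 MiB
```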

The feature could be implemented by adding two optional settings: a checkbox for flash attention (required for KV cache quantization) and a dropdown selecting the quantization type used for both the K and the V cache: f16 (the current default), q8_0, and q4_0.
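A minimal sketch of how the two settings could flow into context creation. `initLlama` is llama.rn's context constructor, but the pass-through field names (`flash_attn`, `cache_type_k`, `cache_type_v`) are assumptions modeled on llama.cpp's context params and may not exist under these names in the binding PocketPal uses; treat them as placeholders:

```ts
// Sketch only. The flash_attn / cache_type_k / cache_type_v fields are
// ASSUMED names mirroring llama.cpp's context params; they are not
// confirmed llama.rn/PocketPal API.
import { initLlama } from 'llama.rn';

type KvCacheType = 'f16' | 'q8_0' | 'q4_0';

interface AdvancedSettings {
  flashAttn: boolean; // checkbox: flash attention on/off
  kvCacheType: KvCacheType; // dropdown: f16 (default), q8_0, q4_0
}

async function initContext(modelPath: string, s: AdvancedSettings) {
  // Quantizing the KV cache requires flash attention, so fall back
  // to f16 when the checkbox is off.
  const kvType = s.flashAttn ? s.kvCacheType : 'f16';
  return initLlama({
    model: modelPath,
    n_ctx: 8192,
    flash_attn: s.flashAttn, // assumed name
    cache_type_k: kvType, // assumed name
    cache_type_v: kvType, // assumed name
  });
}
```

Defaulting the dropdown to f16 would keep the current behavior for existing users, so both settings stay strictly opt-in.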