Transformers have emerged as the backbone of large language models (LLMs). However, generation remains inefficient due to the need to store in memory a cache of key-value representations for past tokens, whose size scales linearly with the input sequence length and batch size. As a solution, we propose Dynamic Memory Compression (DMC), a method for on-line key-value cache compression at inference time. Most importantly, the model learns to apply different compression rates in different heads and layers. We retrofit pre-trained LLMs such as Llama 2 (7B, 13B and 70B) into DMC Transformers, achieving up to ~3.7x throughput increase in auto-regressive inference on an NVIDIA H100 GPU. DMC is applied via continued pre-training on a negligible percentage of the original data without adding any extra parameters. We find that DMC preserves the original downstream performance with up to 4x cache compression, outperforming up-trained grouped-query attention (GQA). GQA and DMC can even be combined to obtain compounded gains. As a result, DMC fits longer contexts and larger batches within any given memory budget.
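To make the memory scaling concrete, the back-of-envelope sketch below estimates the KV cache footprint for a Llama 2 7B-like configuration (32 layers, 32 KV heads, head dimension 128, fp16) and what a uniform 4x compression of that cache would yield. This is an illustrative calculation only, not the DMC method itself; the 4x factor is taken from the compression ratio reported above and applied naively.

```python
# Back-of-envelope KV-cache sizing for a Llama 2 7B-like model in fp16.
# The model constants are public Llama 2 7B hyperparameters; the 4x factor
# below is a hypothetical uniform compression, used only for illustration.

def kv_cache_bytes(batch: int, seq_len: int, n_layers: int = 32,
                   n_kv_heads: int = 32, head_dim: int = 128,
                   bytes_per_elem: int = 2) -> int:
    """Bytes used by the KV cache: 2 tensors (keys and values) per layer,
    each of shape [batch, n_kv_heads, seq_len, head_dim]."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * batch * seq_len

full = kv_cache_bytes(batch=8, seq_len=4096)
compressed = full / 4  # hypothetical uniform 4x cache compression
print(f"uncompressed: {full / 2**30:.1f} GiB, 4x compressed: {compressed / 2**30:.1f} GiB")
# uncompressed: 16.0 GiB, 4x compressed: 4.0 GiB
```

Under these assumptions, compressing the cache 4x frees enough memory to hold roughly 4x more tokens, whether spent on longer contexts or larger batches.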