AIoT-MLSys-Lab / Efficient-LLMs-Survey

[TMLR 2024] Efficient Large Language Models: A Survey
https://arxiv.org/abs/2312.03863

Add one paper on KV-Cache optimization #34

Closed. shadowpa0327 closed this issue 3 months ago.

shadowpa0327 commented 3 months ago

Added one recent paper on KV-Cache optimization: "Palu: Compressing KV-Cache with Low-Rank Projection".
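For context, the idea in the paper's title can be sketched in a few lines: instead of storing the full key/value cache, keep a low-rank latent representation and reconstruct entries on demand. The NumPy snippet below is a generic SVD-based illustration under made-up shapes (`seq_len`, `d`, `r` are all hypothetical), not Palu's actual algorithm.

```python
import numpy as np

# Hypothetical sizes: seq_len cached tokens, head dimension d, target rank r.
seq_len, d, r = 512, 128, 32
rng = np.random.default_rng(0)

# Synthetic key cache with approximately low-rank structure, imitating the
# empirical redundancy of real KV caches (pure illustration, not real data).
K = rng.standard_normal((seq_len, r)) @ rng.standard_normal((r, d))
K += 0.05 * rng.standard_normal((seq_len, d))

# Offline: truncated SVD gives rank-r down/up projections.
U, S, Vt = np.linalg.svd(K, full_matrices=False)
down = Vt[:r].T   # (d, r): compress d dims into r latents
up = Vt[:r]       # (r, d): reconstruct d dims from latents

# Online: cache only the r-dim latents instead of the full d-dim keys.
latents = K @ down            # (seq_len, r), ~4x smaller for r = d/4
K_approx = latents @ up       # (seq_len, d) reconstruction on demand

err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(f"relative reconstruction error: {err:.3f}")
```

With r = d/4 the latent cache is roughly 4x smaller; how well such compression preserves accuracy depends on how low-rank the real key/value matrices are, which is the question the added paper studies.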

SUSTechBruce commented 3 months ago

Good job.