microsoft/LLMLingua
To speed up LLM inference and enhance the LLM's perception of key information, LLMLingua compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
https://llmlingua.com/
MIT License
4.27k stars · 228 forks
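For context, the API this issue exercises is the library's `PromptCompressor`. Below is a minimal usage sketch based on the README's documented interface; the prompt text, instruction, question, and `target_token` value are illustrative.

```python
# Minimal sketch of LLMLingua prompt compression; the default model is large,
# so a smaller one can be passed via PromptCompressor(model_name=...).
from llmlingua import PromptCompressor

compressor = PromptCompressor()  # downloads the default compression model

result = compressor.compress_prompt(
    "A long prompt with retrieved documents, few-shot examples, and instructions ...",
    instruction="Answer the question using the context.",
    question="What does LLMLingua do?",
    target_token=200,  # compress toward roughly 200 tokens
)
print(result["compressed_prompt"])
```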
Added unittest for structured_compress_prompt and fixed bugs
#95 · Closed
SiyunZhao closed this 4 months ago
iofu728 commented 4 months ago
Todo:
- Switch to a customizable parameter
- Add a JSON prompt compression function
- Add documentation
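Since the PR title mentions a unittest for `structured_compress_prompt`, a test for it might look roughly like the sketch below. This is a guess at the shape of such a test: it assumes `structured_compress_prompt` takes the same core arguments as `compress_prompt` and returns a dict with a `compressed_prompt` key; the tests actually added are in the PR diff.

```python
# Hedged sketch of a unittest for structured_compress_prompt; the exact
# signature and return keys are assumptions, not confirmed from the PR.
import unittest

from llmlingua import PromptCompressor


class TestStructuredCompressPrompt(unittest.TestCase):
    def setUp(self):
        # Loads the default compression model; a smaller model can be
        # selected with PromptCompressor(model_name=...) to speed up CI.
        self.compressor = PromptCompressor()

    def test_returns_compressed_prompt(self):
        context = "A long context paragraph about prompt compression ..."
        result = self.compressor.structured_compress_prompt(
            context,
            instruction="Answer the question based on the context.",
            question="What is prompt compression?",
        )
        # The output should include a compressed prompt no longer
        # than the original context.
        self.assertIn("compressed_prompt", result)
        self.assertLessEqual(len(result["compressed_prompt"]), len(context))


if __name__ == "__main__":
    unittest.main()
```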