microsoft/LLMLingua
To speed up LLM inference and enhance the model's perception of key information, LLMLingua compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
https://llmlingua.com/
MIT License · 4.18k stars · 222 forks
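For context, a minimal sketch of the prompt-compression workflow described above, following the `PromptCompressor` API shown in the LLMLingua README; the sample context, instruction, question, and `target_token` budget are illustrative assumptions, not values from this issue.

```python
# A sketch of LLMLingua prompt compression, based on the API shown in the
# project README; the context text and token budget below are illustrative.
from llmlingua import PromptCompressor

llm_lingua = PromptCompressor()  # loads the default small compression model

result = llm_lingua.compress_prompt(
    ["Long context paragraph 1 ...", "Long context paragraph 2 ..."],
    instruction="Answer the question using the context.",
    question="What does LLMLingua compress?",
    target_token=200,  # budget for the compressed prompt
)

# result is a dict; per the README it includes the compressed prompt and stats.
print(result["compressed_prompt"])
```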
Prerelease (LLMLingua): fix the chunk issue and prepare for v0.2.2
#130
Closed
iofu728 closed this 3 months ago

iofu728 commented 3 months ago
fix the chunk issue;
prepare for v0.2.2;