microsoft / LLMLingua

To speed up LLM inference and enhance LLMs' perception of key information, LLMLingua compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
https://llmlingua.com/
MIT License
4.18k stars · 222 forks
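
Since the description above summarizes what the library does, a minimal usage sketch may help. It follows LLMLingua's public `PromptCompressor` API; the context strings, question, and `target_token` value are illustrative assumptions, not taken from this page.

```python
# Minimal LLMLingua usage sketch; inputs and token budget are illustrative.
from llmlingua import PromptCompressor

llm_lingua = PromptCompressor()  # loads the default small LM used for compression

result = llm_lingua.compress_prompt(
    ["<long document chunk 1>", "<long document chunk 2>"],  # context to compress
    instruction="Answer the question based on the context.",
    question="What does LLMLingua do?",
    target_token=200,  # assumed budget for the compressed prompt
)
print(result["compressed_prompt"])
```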

Fix(LLMLingua): fix the assert issue due to tokenization #139

Closed · SiyunZhao closed this 2 months ago

SiyunZhao commented 2 months ago
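
The comment body is not preserved here. As a hedged illustration of the general class of bug the PR title points to, the sketch below shows how tokenizing text segments separately can yield a different token count than tokenizing their concatenation, which can trip a length assertion; the tokenizer choice and the split point are assumptions, not details from this PR.

```python
# Illustrative only: a generic tokenization pitfall, not the exact code path
# patched in this PR. Tokenizer name is an assumption.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Split a word mid-token so the pieces tokenize differently than the whole.
a, b = "unbelie", "vable"
ids_a = tokenizer(a, add_special_tokens=False)["input_ids"]
ids_b = tokenizer(b, add_special_tokens=False)["input_ids"]
ids_ab = tokenizer(a + b, add_special_tokens=False)["input_ids"]

# Segment-wise token counts need not add up to the concatenation's count,
# so code that asserts equality after splitting and rejoining text can fail.
print(len(ids_a) + len(ids_b), "vs", len(ids_ab))
```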