microsoft / LLMLingua

To speed up LLM inference and enhance LLMs' perception of key information, LLMLingua compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
https://llmlingua.com/
MIT License
4.42k stars · 241 forks

Fix (LLMLingua): Resolved a potential ZeroDivisionError caused by the actual compression ratio. #54

Closed · davidberenstein1957 closed this 7 months ago

davidberenstein1957 commented 7 months ago
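A ZeroDivisionError of this kind typically arises when the compressed prompt ends up with zero tokens and the ratio of original to compressed length is computed anyway. Below is a minimal sketch of the sort of guard such a fix involves; the function and parameter names are hypothetical, not LLMLingua's actual API.

```python
# Illustrative guard against a ZeroDivisionError when computing the
# actual compression ratio. Names here are hypothetical examples,
# not LLMLingua's real internals.

def compression_ratio(original_tokens: int, compressed_tokens: int) -> float:
    """Return the original/compressed token ratio, guarding the denominator."""
    if compressed_tokens == 0:
        # An empty compressed prompt would otherwise raise ZeroDivisionError;
        # report a neutral ratio of 1.0 (i.e., no compression) instead.
        return 1.0
    return original_tokens / compressed_tokens


# Usage example: a fully compressed-away prompt no longer crashes.
print(compression_ratio(1200, 60))  # 20.0
print(compression_ratio(1200, 0))   # 1.0 instead of ZeroDivisionError
```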