microsoft / LLMLingua

To speed up LLM inference and enhance the LLM's perception of key information, LLMLingua compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
https://llmlingua.com/
MIT License
4.42k stars 241 forks

Feature(LLMLingua): add examples #11

Closed iofu728 closed 10 months ago

iofu728 commented 10 months ago

Additionally, the example notebooks can now be opened directly in Google Colab.
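
For context, the usage these example notebooks demonstrate is prompt compression through the library's `PromptCompressor` interface. Below is a minimal sketch based on the project's documented API; the sample context strings and the `target_token` budget are illustrative, and parameter names may vary between versions.

```python
# Minimal sketch of LLMLingua prompt compression (illustrative inputs;
# API details based on the project's documentation and may differ by version).
from llmlingua import PromptCompressor

# Loads the default model used to score token importance
# (downloads weights on first run).
llm_lingua = PromptCompressor()

context = [
    "LLMLingua compresses prompts by dropping tokens that a smaller "
    "language model judges to carry little information.",
    "The compressed prompt is then sent to the target LLM, reducing "
    "cost and latency with minimal loss in answer quality.",
]

result = llm_lingua.compress_prompt(
    context,
    instruction="Summarize the following notes.",
    question="What does LLMLingua do?",
    target_token=50,  # rough token budget for the compressed prompt
)

# The result dict includes the compressed prompt and token statistics.
print(result["compressed_prompt"])
print(result["origin_tokens"], "->", result["compressed_tokens"])
```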