microsoft / LLMLingua

To speed up LLM inference and enhance LLMs' perception of key information, LLMLingua compresses the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
https://llmlingua.com/
MIT License

fix wrong keyword argument in LLMLingua2.ipynb #140

Closed · gmaliar closed this 1 month ago

gmaliar commented 2 months ago

What does this PR do?

Fixes a wrong keyword argument in the LLMLingua2.ipynb example, which had gone stale after a change in the API.
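
The thread does not show the diff, so the exact keyword that changed is not stated here. As an illustration only, a minimal sketch of calling LLMLingua-2 following the argument names in the project's current README (not taken from this PR's diff):

```python
from llmlingua import PromptCompressor

# Load the LLMLingua-2 compressor; use_llmlingua2 selects the
# LLMLingua-2 token-classification compression path.
llm_lingua = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,
)

prompt = "Your long prompt goes here ..."

# Compress to roughly 33% of the original length, keeping
# newlines and question marks intact via force_tokens.
result = llm_lingua.compress_prompt(
    prompt,
    rate=0.33,
    force_tokens=["\n", "?"],
)
print(result["compressed_prompt"])
```

A notebook written against an earlier version of this API could pass an argument name that the current release no longer accepts, which is the kind of mismatch this PR corrects.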

Before submitting

Who can review?

@SiyunZhao

iofu728 commented 2 months ago

Thanks for your help.