microsoft / LLMLingua

To speed up LLM inference and enhance LLMs' perception of key information, LLMLingua compresses the prompt and KV-cache, achieving up to 20x compression with minimal performance loss.
https://llmlingua.com/
MIT License

[Feature Request]: Docker service support #132

Closed eav-solution closed 5 months ago

eav-solution commented 5 months ago

Is your feature request related to a problem? Please describe.

No response

Describe the solution you'd like

No response

Additional context

Hello author, could you please consider adding Docker service support?

iofu728 commented 5 months ago

Hi @eav-solution, thanks for your advice.

However, LLMLingua is a very lightweight pip package. If you need to use it, you can simply install it with `pip install llmlingua`. No additional environment dependencies are required.
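For anyone who still prefers running it in a container, the pip-only installation keeps a Dockerfile short. A minimal sketch (the base image, script name, and entrypoint are illustrative assumptions, not part of the project):

```dockerfile
# Minimal sketch: base image and entry script are illustrative choices.
FROM python:3.10-slim

# LLMLingua itself only needs the pip package.
RUN pip install --no-cache-dir llmlingua

# Copy your own compression script into the image (hypothetical filename).
WORKDIR /app
COPY compress.py /app/compress.py

CMD ["python", "compress.py"]
```

Build and run with the usual `docker build -t llmlingua-app .` and `docker run llmlingua-app`; no extra system dependencies are needed beyond what the Python base image provides.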

If you encounter any issues with the installation, please provide the relevant context, and we can help you resolve them.