To speed up LLM inference and enhance models' perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
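As a minimal sketch of the basic compression workflow (assuming the `llmlingua` package's `PromptCompressor` entry point, a `compress_prompt()` method, and a `target_token` budget parameter; these names come from common usage of the library and are not stated in this note):

```python
from llmlingua import PromptCompressor  # assumed import path

# The compressor uses a small language model to score tokens and
# drop low-information ones first.
llm_lingua = PromptCompressor()

long_context = (
    "The quarterly report covers revenue, churn, and hiring. "
    "Revenue grew 12% quarter over quarter, driven by the new API tier. "
    "Churn held steady at 2.1%, and headcount increased by eight engineers."
)

# target_token caps the compressed length; the parameter name is an
# assumption based on typical usage of the library.
result = llm_lingua.compress_prompt(
    long_context,
    instruction="Summarize the report.",
    question="What drove revenue growth?",
    target_token=60,
)
print(result["compressed_prompt"])
```

The compressed string can then be passed to any downstream LLM in place of the original prompt, trading a small scoring pass for a much shorter (and cheaper) inference call.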
Add `structured_compress_prompt()`, enabling users to mark, per section, whether that section should be compressed and at what rate. This allows compression strategies that vary across the full context, improving the compressor's efficiency and adaptability.
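A sketch of how per-section control might look, assuming inline `<llmlingua, ...>` tags where `compress=False` pins a section verbatim and `rate` sets a per-section compression ratio; the tag syntax and parameter names are assumptions based on the library's documented examples, not confirmed by this note:

```python
from llmlingua import PromptCompressor  # assumed import path

llm_lingua = PromptCompressor()

# compress=False keeps the system instruction intact; rate=0.3 compresses
# the background section aggressively.
structured_prompt = (
    "<llmlingua, compress=False>System: answer in one sentence.</llmlingua>"
    "<llmlingua, rate=0.3>Background: the service was migrated to the new "
    "cluster last week, latency dropped by 40%, and error rates returned "
    "to baseline after the cache patch was rolled back.</llmlingua>"
)

result = llm_lingua.structured_compress_prompt(
    structured_prompt,
    instruction="",
    question="What happened to latency?",
    rate=0.5,  # assumed overall target for sections without an explicit rate
)
print(result["compressed_prompt"])
```

The benefit over the flat API is that sections the user knows are critical (instructions, formats, few-shot exemplars) can be exempted from compression while bulky context absorbs most of the token budget reduction.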