Closed ZhexuanZhou closed 5 months ago
Hi @ZhexuanZhou, thanks for your interest in LLMLingua.
Currently, there are two methods to preserve structured data:

1. Pass `force_tokens` when calling `compress_prompt`:

```python
compressed_prompt = llm_lingua.compress_prompt(prompt, rate=0.33, force_tokens=["|", "-"])
```

2. Use `structured_compress_prompt` with inline `<llmlingua>` tags to mark spans that must not be compressed:

```python
structured_prompt = """<llmlingua, compress=False>|</llmlingua><llmlingua, rate=0.4> Method</llmlingua><llmlingua, compress=False>|</llmlingua>"""
compressed_prompt = llm_lingua.structured_compress_prompt(structured_prompt, instruction="", question="", rate=0.5)
```
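For a whole document containing a Markdown table, writing the `<llmlingua>` tags by hand is tedious. A minimal sketch of automating it — `protect_table_rows` is a hypothetical helper, not part of LLMLingua; it wraps table rows in `compress=False` tags and prose lines in a compressible-rate tag, producing a string suitable for `structured_compress_prompt`:

```python
def protect_table_rows(document: str, prose_rate: float = 0.4) -> str:
    """Wrap Markdown-table rows in <llmlingua, compress=False> tags so their
    layout survives compression verbatim, while prose lines remain
    compressible at `prose_rate`. Hypothetical helper, not a LLMLingua API.
    """
    parts = []
    for line in document.splitlines():
        if line.lstrip().startswith("|"):  # heuristically treat as a table row
            parts.append(f"<llmlingua, compress=False>{line}</llmlingua>")
        else:
            parts.append(f"<llmlingua, rate={prose_rate}>{line}</llmlingua>")
    return "\n".join(parts)


doc = (
    "Some long background prose about the dataset.\n"
    "| Method | Score |\n"
    "|--------|-------|\n"
    "| A      | 0.91  |"
)
structured = protect_table_rows(doc)
```

The resulting `structured` string can then be passed to `llm_lingua.structured_compress_prompt(structured, rate=0.5)`, so only the prose is compressed and the table's delimiters stay intact.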
Describe the issue
Did you test the QA performance on compressed Markdown tables? In my case, compression breaks the structure of the Markdown table, and the LLM can no longer answer the question properly, whereas the answer was as expected before compression.
My task is RAG; do you have any advice for compressing documents that contain tables?