Open synergiator opened 7 months ago
Hi @synergiator, thank you very much for your interest in LLMLingua and for sharing the detailed experimental results. They are very helpful to us.
You can find the recovery function at https://github.com/microsoft/LLMLingua/blob/main/llmlingua/prompt_compressor.py#L922. However, I suspect the increase in the no-answer ratio in these cases is due to the loss of necessary information. I'm curious whether you used LongLLMLingua or LLMLingua; if it was LLMLingua, the loss of valuable information might be more significant, especially at a high compression ratio of about 10x-20x.
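For reference, a minimal sketch of driving the compressor at a given compression ratio. The `PromptCompressor` call is left as a comment because it requires downloading a model; the parameter names (`question`, `target_token`, `rank_method="longllmlingua"`) follow the LLMLingua README and may differ between versions, so treat them as assumptions rather than a definitive recipe.

```python
def target_tokens(origin_tokens: int, ratio: float) -> int:
    """Token budget implied by a compression ratio (e.g. ratio=10 keeps ~1/10)."""
    return max(1, round(origin_tokens / ratio))

# Usage sketch (not run here; requires the llmlingua package and a model):
#
# from llmlingua import PromptCompressor
# compressor = PromptCompressor()
# result = compressor.compress_prompt(
#     context,                                 # list of document chunks
#     question=question,                       # question-aware (LongLLMLingua-style) scoring
#     rank_method="longllmlingua",             # assumed name, per the README
#     target_token=target_tokens(20000, 10),   # ~10x compression
# )
# print(result["compressed_prompt"])
```

Passing the question and using the LongLLMLingua ranking is what lets the compressor prioritize question-relevant spans, which is why information loss at 10x-20x tends to be less severe than with plain LLMLingua.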
Nevertheless, we greatly appreciate your experiments and conclusions.
As mentioned in the paper, key concepts might get omitted or corrupted by the compression, such that GPT can't process the compressed prompt.
You also mention there is an approach to mitigate this issue; could you share details on the corresponding configuration options in the Python implementation?
In the attached image, I've tested how GPT confidence degrades under compression on the qasper subset of the LongBench benchmark.
Wrong answers or no answer are possible: