microsoft / LLMLingua

[EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.
https://llmlingua.com/
MIT License

[Bug]: LLMLingua-2 uses wrong special tokens by default #181

Open cornzz opened 2 months ago

cornzz commented 2 months ago

Describe the bug

The TokenClfDataset is initialized without a model_name argument and therefore defaults to bert-base-multilingual-cased, meaning that LLMLingua-2 uses incorrect special tokens, i.e. this branch is taken:

    if "bert-base-multilingual-cased" in model_name:
            self.cls_token = "[CLS]"
            self.sep_token = "[SEP]"
            self.unk_token = "[UNK]"
            self.pad_token = "[PAD]"
            self.mask_token = "[MASK]"

instead of

    elif "xlm-roberta-large" in model_name:
            self.bos_token = "<s>"
            self.eos_token = "</s>"
            self.sep_token = "</s>"
            self.cls_token = "<s>"
            self.unk_token = "<unk>"
            self.pad_token = "<pad>"
            self.mask_token = "<mask>"

The tokenizer simply maps these wrong special tokens (bos/eos/pad) to the unknown token; I don't know exactly what effect that has. The difference in compression output is not very significant, but there is some difference.
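As an illustration (not part of the original report), here is a minimal sketch using Hugging Face transformers that shows how the xlm-roberta-large tokenizer resolves the BERT-style special tokens; the exact IDs depend on the tokenizer vocabulary:

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("xlm-roberta-large")

    # BERT-style special tokens are not in the XLM-RoBERTa vocabulary,
    # so they all resolve to the unknown-token id.
    print(tok.convert_tokens_to_ids(["[CLS]", "[SEP]", "[PAD]", "[MASK]"]))
    print(tok.unk_token_id)

    # The model's own special tokens resolve to their dedicated ids.
    print(tok.convert_tokens_to_ids(["<s>", "</s>", "<pad>", "<mask>"]))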

Steps to reproduce

Add print(tokenized_text) on line 57 of utils.py to see the wrong special tokens being used for the xlm-roberta-large based compression model, for example with a call like the one below.
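A compression call along these lines, adapted from the LLMLingua-2 README (exact arguments may differ), should trigger the code path where the added print shows the tokens:

    from llmlingua import PromptCompressor

    # Model name and arguments follow the LLMLingua-2 README; adjust as needed.
    compressor = PromptCompressor(
        model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
        use_llmlingua2=True,
    )
    result = compressor.compress_prompt("Some sufficiently long prompt ...", rate=0.33)
    print(result["compressed_prompt"])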

Expected Behavior

The correct special tokens should be used for the respective compression model.
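A hypothetical sketch of the fix (variable and argument names are illustrative; the actual call site in prompt_compressor.py may differ) would be to forward the compression model's name when the dataset is constructed:

    # Hypothetical fix: pass the model name through so TokenClfDataset
    # selects the xlm-roberta-large special tokens instead of the BERT defaults.
    dataset = TokenClfDataset(
        texts,                       # illustrative variable names
        tokenizer=self.tokenizer,
        max_len=max_len,
        model_name=self.model_name,
    )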

Additional Information

iofu728 commented 3 weeks ago

Hi @cornzz, thanks for your feedback. I will review this PR ASAP.