ludwig-ai / ludwig

Low-code framework for building custom LLMs, neural networks, and other AI models
http://ludwig.ai
Apache License 2.0

Revert: Simplify how we set pad token and pad token ID for huggingfac… #3897

Closed arnavgarg1 closed 10 months ago

arnavgarg1 commented 10 months ago

This revert addresses a critical issue introduced in the previous pull request (#3735), where the simplification of pad token configuration inadvertently led to the PAD token being mapped to the same token ID as the UNK (unknown) token. This mapping anomaly resulted in quality degradation during fine-tuning.

The problem surfaced during fine-tuning: instead of learning to predict an EOS (end-of-sequence) token to signal where a sequence ends, the model learned to predict an UNK token at the end of sequences. As a result, the model could not reliably recognize when to halt during generation, degrading the overall performance and quality of the fine-tuned model.
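To see why this matters at generation time, here is a minimal, hypothetical greedy-decoding sketch (toy token IDs, no transformers required): the stopping check only fires on the EOS token ID, so a model that has learned to emit UNK at the end of sequences never triggers it and runs until the token budget is exhausted.

```python
# Toy illustration of why predicting UNK instead of EOS breaks stopping.
# UNK_ID and EOS_ID mirror the Llama-2 IDs shown below (0 and 2); the
# next_token_fn lambdas are hypothetical stand-ins for a model.
UNK_ID, EOS_ID = 0, 2

def greedy_generate(next_token_fn, eos_id, max_new_tokens=8):
    """Greedy decoding loop that halts only when eos_id is produced."""
    out = []
    for _ in range(max_new_tokens):
        tok = next_token_fn(out)
        out.append(tok)
        if tok == eos_id:
            break  # a correctly fine-tuned model stops here
    return out

# A model trained with the buggy padding emits UNK where EOS should be,
# so the loop never sees EOS and runs to max_new_tokens.
buggy_out = greedy_generate(lambda ctx: 37 if len(ctx) < 3 else UNK_ID, EOS_ID)
fixed_out = greedy_generate(lambda ctx: 37 if len(ctx) < 3 else EOS_ID, EOS_ID)
print(len(buggy_out))  # 8 -> never terminated early
print(fixed_out)       # [37, 37, 37, 2] -> stopped at EOS
```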

This reversion aims to restore the previous pad token setup and rectify the unintended mapping issue, ensuring that the model correctly learns to predict EOS tokens for proper sequence termination during fine-tuning.

Demonstration of the bug that was introduced, using Llama-2

Current:

>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
>>> tokenizer
LlamaTokenizerFast(name_or_path='meta-llama/Llama-2-7b-hf', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>'}, clean_up_tokenization_spaces=False),  added_tokens_decoder={
    0: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    1: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    2: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
>>> tokenizer.pad_token = "[PAD]"
>>> tokenizer
LlamaTokenizerFast(name_or_path='meta-llama/Llama-2-7b-hf', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'pad_token': '[PAD]'}, clean_up_tokenization_spaces=False),  added_tokens_decoder={
    0: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    1: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    2: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
>>> tokenizer.pad_token_id
0
>>> str2idx = tokenizer.vocab
>>> idx2str = {v: k for k, v in str2idx.items()}
>>> idx2str[0]
'<unk>'
>>> tokenizer.unk_token_id
0

The issue here is that we're mapping the new PAD token to the same token ID as the UNK token (ID 0). Since `padding_side='right'`, those pad positions are the last tokens passed into the model's forward pass, so padded training sequences appear to end in UNK.
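A small self-contained sketch (hypothetical token IDs matching the Llama-2 ones above; `pad_right` is an illustrative helper, not a library function) makes the collision concrete: with the buggy configuration the trailing pad positions carry the UNK ID, while padding with the EOS ID keeps the tail consistent with how sequences should end.

```python
# Hypothetical toy IDs mirroring the Llama-2 tokenizer: <unk>=0, <s>=1, </s>=2.
UNK_ID, BOS_ID, EOS_ID = 0, 1, 2

def pad_right(ids, max_len, pad_id):
    """Right-pad a list of token IDs to max_len with pad_id."""
    return ids + [pad_id] * (max_len - len(ids))

seq = [BOS_ID, 37, 98, EOS_ID]  # "<s> ... </s>", two hypothetical content IDs

# Buggy behavior from #3735: pad_token_id resolves to the UNK token ID.
buggy = pad_right(seq, 8, pad_id=UNK_ID)
# Restored behavior: pad with the EOS token ID instead.
fixed = pad_right(seq, 8, pad_id=EOS_ID)

print(buggy)  # [1, 37, 98, 2, 0, 0, 0, 0] -> tail looks like repeated <unk>
print(fixed)  # [1, 37, 98, 2, 2, 2, 2, 2] -> tail looks like repeated </s>
```

If those trailing positions are not fully masked out of the loss, the buggy variant teaches the model that sequences end in `<unk>` rather than `</s>`.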

This is what used to happen before (and what this revert goes back to):

>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
>>> tokenizer.pad_token = tokenizer.eos_token
>>> tokenizer.pad_token_id = tokenizer.eos_token_id
>>> tokenizer.pad_token
'</s>'
>>> tokenizer.pad_token_id
2
>>> tokenizer.eos_token
'</s>'
>>> tokenizer.eos_token_id
2
>>> tokenizer.unk_token_id
0
>>> tokenizer.unk_token
'<unk>'
>>> tokenizer
LlamaTokenizerFast(name_or_path='meta-llama/Llama-2-7b-hf', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'pad_token': '</s>'}, clean_up_tokenization_spaces=False),  added_tokens_decoder={
    0: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    1: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    2: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
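As a guard against regressions of this kind, one could add a sanity check after configuring any tokenizer. This is a hypothetical helper, not part of Ludwig or transformers; the `SimpleNamespace` objects below are stand-ins mimicking the two tokenizer configurations shown above.

```python
from types import SimpleNamespace

def check_pad_token(tokenizer):
    """Raise if the PAD token ID collides with the UNK token ID."""
    if tokenizer.pad_token_id == tokenizer.unk_token_id:
        raise ValueError(
            f"pad_token_id ({tokenizer.pad_token_id}) collides with "
            f"unk_token_id ({tokenizer.unk_token_id}); padding would be "
            "indistinguishable from unknown tokens during fine-tuning."
        )

# Stand-ins for the two configurations demonstrated above (toy IDs):
buggy_tok = SimpleNamespace(pad_token_id=0, unk_token_id=0)  # PAD == UNK
fixed_tok = SimpleNamespace(pad_token_id=2, unk_token_id=0)  # PAD == EOS

check_pad_token(fixed_tok)   # passes silently
# check_pad_token(buggy_tok) # would raise ValueError
```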
github-actions[bot] commented 10 months ago

Unit Test Results

  6 files  ±0    6 suites  ±0   14m 19s :stopwatch: +9s
12 tests ±0    9 :heavy_check_mark: ±0    3 :zzz: ±0  0 :x: ±0
60 runs  ±0  42 :heavy_check_mark: ±0  18 :zzz: ±0  0 :x: ±0

Results for commit 224bebf7. ± Comparison against base commit 3203cc16.