salesforce / CodeGen

CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex.
Apache License 2.0

inconsistent `eos_token_id` and `pad_token_id` in model & tokenizer config #50

Open shijie-wu opened 1 year ago

shijie-wu commented 1 year ago

Hi,

Based on the paper, CodeGen follows the GPT-2 tokenizer and training scheme, i.e. bos_token, eos_token, and pad_token are all "<|endoftext|>". However, the HF model config ships with an incorrect bos_token_id and pad_token_id (eos_token_id was fixed by https://github.com/salesforce/CodeGen/issues/32).

Way to reproduce the issue & expected behavior

from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

assert model.config.eos_token_id == tokenizer.eos_token_id # pass (50256 == 50256)
assert model.config.bos_token_id == tokenizer.bos_token_id # failed (1 != 50256)
assert model.config.pad_token_id == tokenizer.pad_token_id # failed (None != 50256)
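
A minimal workaround, assuming the tokenizer's values are the intended ones (per the paper, all three special tokens are "<|endoftext|>", id 50256), is to override the config fields by hand after loading:

# Align the model config with the tokenizer's GPT-2-style special tokens.
model.config.bos_token_id = tokenizer.bos_token_id  # 50256
model.config.pad_token_id = tokenizer.pad_token_id  # 50256

assert model.config.bos_token_id == tokenizer.bos_token_id  # now passes
assert model.config.pad_token_id == tokenizer.pad_token_id  # now passes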
OmerHefets commented 1 year ago

+1, facing the same error here. I've also found another mismatch, between the tokenizer's vocab size and the config's vocab size:

assert model.config.vocab_size == tokenizer.vocab_size # failed (51200 != 50257)
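
This particular mismatch may be intentional rather than a bug: model embedding tables are often padded past the tokenizer's vocabulary for hardware efficiency, so config.vocab_size counts embedding rows, not emittable tokens. A quick check using the standard transformers API (the interpretation of the extra rows is an assumption):

# config.vocab_size sizes the embedding matrix; tokenizer.vocab_size
# counts the tokens the tokenizer can actually produce.
emb_rows = model.get_input_embeddings().weight.shape[0]
print(emb_rows, tokenizer.vocab_size)  # 51200 50257
# Any rows beyond the tokenizer's reach are presumably padding.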
Nan-Do commented 1 year ago

Same here!

The problem seems to affect all the other tokenizers (I have tried the 2B size) and models (I have tested the multi variant) too. According to the Hugging Face documentation, setting the padding token equal to the EOS token by hand seems to be common practice for GPT2Tokenizer.
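
A minimal sketch of that practice, assuming the same tokenizer and model objects as above:

tokenizer.pad_token = tokenizer.eos_token           # reuse "<|endoftext|>" as padding
model.config.pad_token_id = tokenizer.eos_token_id  # keep the model config in sync

# Padding is then usable for batched encoding:
batch = tokenizer(["def foo():", "x = 1"], padding=True, return_tensors="pt")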
I had to set the value in the GenerationConfig too:

from transformers import GenerationConfig

generation_config = GenerationConfig(
    temperature=0.6,
    top_p=0.95,
    repetition_penalty=1.15,
)
generation_config.pad_token_id = tokenizer.eos_token_id
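
For completeness, a usage sketch, assuming a transformers version whose generate() accepts a generation_config argument (the prompt is illustrative):

inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model.generate(
    **inputs,
    generation_config=generation_config,
    do_sample=True,   # temperature/top_p only take effect when sampling
    max_new_tokens=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))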

What are your results when setting the values by hand?
For me, the quality of the generated code seems fine.