shijie-wu opened this issue 1 year ago
+1, facing the same error here, and I've found another mismatch between the tokenizer's vocab size and the config's vocab size:

```python
assert model.config.vocab_size == tokenizer.vocab_size  # failed (51200 != 50257)
```
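For reference, a minimal way to reproduce this check without loading the full model weights could look like the sketch below (the checkpoint name is an assumption; the thread reports the same mismatch across sizes and variants):

```python
from transformers import AutoConfig, AutoTokenizer

checkpoint = "Salesforce/codegen-2B-multi"  # assumed checkpoint, adjust to the one you use
config = AutoConfig.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# The config reports the (padded) embedding size, while the tokenizer reports the
# underlying GPT-2 vocabulary, hence the 51200 vs 50257 mismatch reported above.
print(config.vocab_size, tokenizer.vocab_size)
assert config.vocab_size == tokenizer.vocab_size  # raises AssertionError per the report above
```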
Same here! The problem seems to affect the other tokenizers (I have tried the 2B size) and models (I have tested the multi variant) as well. According to the Hugging Face documentation, setting the padding token equal to the EOS token by hand seems to be common practice for GPT2Tokenizer.
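For example, a sketch of that pattern (the checkpoint name is just a placeholder):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-multi")  # placeholder checkpoint

# GPT-2-style tokenizers ship without a dedicated pad token, so reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
```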
I had to set the value in the GenerationConfig too:

```python
from transformers import GenerationConfig

generation_config = GenerationConfig(
    temperature=0.6,
    top_p=0.95,
    repetition_penalty=1.15,
)
# Reuse the tokenizer's EOS token id as the pad token id for generation.
generation_config.pad_token_id = tokenizer.eos_token_id
```
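For completeness, here is a usage sketch (assuming a recent transformers version and that `model` and `tokenizer` are the CodeGen model and tokenizer already loaded):

```python
inputs = tokenizer("def hello_world():", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,       # needed for temperature/top_p to take effect
    max_new_tokens=64,
    generation_config=generation_config,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```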
What results do you get when setting the values by hand? For me, the quality of the generated code seems fine.
Hi,
Based on the paper, CodeGen is based on the GPT-2 tokenizer and training scheme, i.e. `bos_token`, `eos_token`, and `pad_token` are all `"<|endoftext|>"`. However, it seems the HF model config includes an incorrect `bos_token_id` and `pad_token_id` (`eos_token_id` is fixed by https://github.com/salesforce/CodeGen/issues/32).
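As an illustration of that claim (a sketch against an assumed `Salesforce/codegen-2B-multi` checkpoint, not the original reproduction steps), one can compare the config's ids with the tokenizer's `"<|endoftext|>"` id:

```python
from transformers import AutoConfig, AutoTokenizer

checkpoint = "Salesforce/codegen-2B-multi"  # assumed checkpoint for illustration
config = AutoConfig.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Under the GPT-2 scheme all three ids should equal the "<|endoftext|>" id;
# the report above is that bos_token_id and pad_token_id in the config do not.
print("tokenizer <|endoftext|> id:", tokenizer.convert_tokens_to_ids("<|endoftext|>"))
print("config bos_token_id:", config.bos_token_id)
print("config eos_token_id:", config.eos_token_id)
print("config pad_token_id:", config.pad_token_id)
```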
Way to reproduce the issue & expected behavior