You may need to add the FIM-related tokens to eos_token_id.
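For reference, you can confirm which IDs the FIM tokens map to with the tokenizer (a minimal sketch; the checkpoint path is a placeholder for your local model):

from transformers import AutoTokenizer

# Placeholder path; point this at your local Qwen2.5-Coder checkpoint.
tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-7B")

# Print the IDs of the FIM-related special tokens so they can be
# added to eos_token_id in config.json / generation_config.json.
for t in ("<|fim_prefix|>", "<|fim_middle|>", "<|fim_suffix|>", "<|fim_pad|>"):
    print(t, tok.convert_tokens_to_ids(t))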
I still encounter the same problem after modifying config.json and generation_config.json with TGI-2.3.0.
cat generation_config.json
{
"bos_token_id": 151643,
"eos_token_id": [
151643,
151662
],
"max_new_tokens": 2048,
"transformers_version": "4.45.0.dev0"
}
cat config.json
{
"architectures": [
"Qwen2ForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": [
151643,
151662
],
I think you need to add "<|fim_pad|>" to the special tokens. You can try it like this:
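A sketch of the relevant tokenizer_config.json entry, assuming the standard added_tokens_decoder layout (151662 is "<|fim_pad|>" in the Qwen2.5 tokenizer); check the surrounding entries in your file and make sure "special" is true:

"added_tokens_decoder": {
  "151662": {
    "content": "<|fim_pad|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false,
    "special": true
  }
}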
Thank you for your detailed response. However, I need to deploy this for server-side inference, and modifying the local model's config has not worked.
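One thing that may still help with a server-side deployment is passing stop sequences per request through TGI's /generate endpoint, so generation stops on the FIM tokens even if the on-disk config changes are not picked up (a sketch, assuming TGI is listening on localhost:8080):

import requests

resp = requests.post(
    "http://localhost:8080/generate",  # adjust to your TGI host/port
    json={
        "inputs": "<|fim_prefix|>def add(a, b):<|fim_suffix|>\n<|fim_middle|>",
        "parameters": {
            # Request-time stop sequences; applied regardless of what
            # eos_token_id is set to in the model's config files.
            "stop": ["<|fim_pad|>", "<|endoftext|>"],
            "max_new_tokens": 128,
        },
    },
)
print(resp.json()["generated_text"])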
This issue is being closed due to no response for more than 1 day.