[Open] kwjlhh opened this issue 5 days ago
Reply: Please provide the input and the sampling parameters so that we can try reproducing the issue.
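For reference, a reproduction report would typically include the tool schema, the messages, and the sampling parameters. The sketch below only illustrates that shape: the base URL, API key, prompt, tool schema, and sampling values are placeholders, and only the tool name text_classification is taken from the response shown in the description below.

```python
# Hypothetical reproduction sketch -- everything except the tool name is a placeholder.
# Assumes a vLLM 0.6.2 OpenAI-compatible server started with tool calling enabled,
# e.g. with --enable-auto-tool-choice --tool-call-parser hermes (as the Qwen docs suggest).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "text_classification",
        "description": "Classify a piece of text and return scored labels.",
        "parameters": {
            "type": "object",
            "properties": {
                "classifications": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "classification": {"type": "string"},
                            "score": {"type": "number"},
                        },
                        "required": ["classification", "score"],
                    },
                }
            },
            "required": ["classifications"],
        },
    },
}]

resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[{"role": "user", "content": "Classify this text: ..."}],  # placeholder prompt
    tools=tools,
    temperature=0.7,  # placeholder sampling parameters
    top_p=0.8,
)
print(resp.choices[0].message)
```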
Model Series
Qwen2.5
What are the models used?
Qwen2.5-72B-Instruct
What is the scenario where the problem happened?
deployment with vllm, tool calling with vllm
Is this a known issue?
Information about environment
Ubuntu 22.04 Python 3.10.14 vllm 0.6.2
Log output
The log is fine.
Description
With some probability, the response comes back with the tool-call JSON left in the assistant content while tool_calls is empty, for example:

"message": {
    "role": "assistant",
    "content": "\n{\"name\": \"text_classification\", \"arguments\": {\"classifications\": [{\"classification\": \"test\", \"score\": 0.9563}, {\"classification\": \test2\", \"score\": 0.0321}, {\"classification\": \"test3\", \"score\": 0.0102}]}\n",
    "tool_calls": []
},

The model files were all pulled on September 25, which should be later than the tokenizer_config.json update mentioned in earlier issues.
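Not a fix, only a client-side mitigation some callers use while this is investigated: when tool_calls comes back empty but the content looks like a single tool-call object, try to parse it yourself. A minimal sketch, assuming the openai SDK message object from the request above; recover_tool_call is a hypothetical helper name.

```python
import json

def recover_tool_call(message):
    """Best-effort fallback for the case reported above: the server returns
    an empty tool_calls list, but message.content holds what looks like a
    single tool-call JSON object. Expects an openai SDK ChatCompletionMessage.
    Returns a dict with "name" and "arguments", or None."""
    if message.tool_calls:
        return None  # normal path: the server already extracted the call
    content = (message.content or "").strip()
    if not content.startswith("{"):
        return None
    try:
        obj = json.loads(content)
    except json.JSONDecodeError:
        # malformed output (like the truncated JSON in the description) cannot be recovered
        return None
    if isinstance(obj, dict) and "name" in obj and "arguments" in obj:
        return obj
    return None
```

Note that this only helps when the content is valid JSON; in the example above the string is malformed, so the parser in vLLM has nothing to extract either, which is why knowing the exact input and sampling parameters matters for reproducing the issue.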