andylzming opened this issue 2 weeks ago
Not following the issue template.
Your vLLM version is too old. Try disabling custom all-reduce if it is enabled and you are using PCIe cards.
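For reference, a minimal sketch of what "disabling custom all-reduce" looks like when starting vLLM directly; the --disable-custom-all-reduce engine argument is standard, but the model path and tensor-parallel size below are assumptions, and Xinference may forward engine arguments differently:

# Sketch only: start vLLM's OpenAI-compatible server with custom
# all-reduce disabled, the usual workaround for multi-GPU hangs on
# PCIe-connected (non-NVLink) cards.
# The model path and --tensor-parallel-size are assumed, not from the report.
python -m vllm.entrypoints.openai.api_server \
    --model Qwen/Qwen2.5-32B-Instruct \
    --tensor-parallel-size 2 \
    --disable-custom-all-reduce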
vLLM has been upgraded to 0.5.1, but the issue still persists.
(xinference) [root@gpu-server ~]# pip list | grep vllm
vllm 0.5.1
vllm-flash-attn 2.5.9
vllm-nccl-cu12 2.18.1.0.4.0
Can you please follow the issue template? What is your driver version? What card are you using? Did you use multiple cards? How did you start vLLM? And so on. Why does Xinference show custom-qwen25-32-instruct? How can we actually reproduce this?
Model Series
Qwen2.5
What are the models used?
Qwen2.5-32B-Instruct
What is the scenario where the problem happened?
Xinference
Is this a known issue?
Information about environment
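Since the scenario is Xinference but no launch command was provided, here is a hypothetical sketch of launching the built-in Qwen2.5 model with the vLLM engine; the reporter's model is a custom registration (custom-qwen25-32-instruct), so the model name and flags below are assumptions:

# Hypothetical sketch only: the reporter's actual custom-model
# registration and launch command are not in the report.
xinference launch \
    --model-engine vllm \
    --model-name qwen2.5-instruct \
    --size-in-billions 32 \
    --model-format pytorch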
System Info
Running Xinference with Docker?
Version info
The command used to start Xinference
Reproduction
Expected behavior
Normal inference results.
Log output