xorbitsai / inference

Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
https://inference.readthedocs.io
Apache License 2.0
5.31k stars 430 forks

Cannot launch built-in models on an offline network; a yi-vl-chat model registered as a custom model errors at runtime #1545

Open jiaolongxue opened 5 months ago

jiaolongxue commented 5 months ago

Because my environment is an internal network with no internet access, I can't launch yi-vl-chat as a built-in model directly. So I first downloaded the GGUF version of yi-vl-chat and registered it as a custom model. However, when I send a chat request, the server returns an error.

The logs are below, from the latest pulled Docker image:


2024-05-26 19:01:44,750 xinference.api.restful_api 1 ERROR    Chat completion stream got an error: [address=0.0.0.0:38377, pid=407] can only concatenate str (not "list") to str
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/xinference/api/restful_api.py", line 1378, in stream_results
    iterator = await model.chat(
  File "/opt/conda/lib/python3.10/site-packages/xoscar/backends/context.py", line 227, in send
    return self._process_result_message(result)
  File "/opt/conda/lib/python3.10/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "/opt/conda/lib/python3.10/site-packages/xoscar/backends/pool.py", line 659, in send
    result = await self._run_coro(message.message_id, coro)
  File "/opt/conda/lib/python3.10/site-packages/xoscar/backends/pool.py", line 370, in _run_coro
    return await coro
  File "/opt/conda/lib/python3.10/site-packages/xoscar/api.py", line 384, in __on_receive__
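A likely cause of the `can only concatenate str (not "list") to str` error: OpenAI-style vision requests send `content` as a *list* of parts (text plus image), while a text-only prompt template does plain string concatenation. The sketch below is hypothetical, not Xinference's actual code; `naive_format` and `flatten_content` are illustrative names only, showing how the TypeError arises and one possible workaround of keeping only the text parts.

```python
# Hypothetical reproduction of the TypeError from the log above.
# Not Xinference source code; names are illustrative.

def naive_format(message: dict) -> str:
    # A text-only template concatenates str + content;
    # this raises TypeError when content is a list of parts.
    return "USER: " + message["content"] + "\nASSISTANT: "

def flatten_content(content) -> str:
    # Possible workaround: extract only the text parts of a
    # multimodal (OpenAI vision style) message content list.
    if isinstance(content, str):
        return content
    return "\n".join(
        part.get("text", "")
        for part in content
        if part.get("type") == "text"
    )

# An OpenAI-style vision message: content is a list, not a string.
vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this picture?"},
        {"type": "image_url", "image_url": {"url": "http://example.com/cat.png"}},
    ],
}

try:
    naive_format(vision_message)
except TypeError as e:
    print(e)  # can only concatenate str (not "list") to str

# Flattening the content first makes the template succeed.
text_only = {**vision_message, "content": flatten_content(vision_message["content"])}
print(naive_format(text_only))
```

If this is the cause, the fix belongs in the model's chat template handling rather than in client code, since the list-shaped `content` is valid per the OpenAI multimodal API.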
codingl2k1 commented 5 months ago

Is there a more complete log?

cattlesheep commented 5 months ago

Has this been resolved? I'm running into the same problem.

github-actions[bot] commented 3 months ago

This issue is stale because it has been open for 7 days with no activity.

docShen commented 1 month ago

Has this been resolved? I have the same problem.