lm-sys / FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Apache License 2.0

When the type of content in the incoming messages is text, an error occurs. #3017

Open icowan opened 6 months ago

icowan commented 6 months ago

When the type of content in the incoming messages is text, an error occurs.

API: /v1/chat/completions

request

{
    "max_tokens": 0,
    "model": "qwen-72b-chat-int4",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "text": "Assistant is a large language model trained by Meta.",
                    "type": "text"
                }
            ]
        }
    ],
    "functions": null,
    "function_call": null,
    "temperature": 0,
    "top_p": 0,
    "n": 0,
    "stream": true,
    "stop": [
        "\nObservation:","Observation:"
    ],
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "logit_bias": null,
    "user": "",
    "response_format": {
        "type": ""
    },
    "seed": null,
    "tools": null,
    "tool_choice": null
}
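As a client-side workaround, sending `content` as a plain string instead of a list of `{"type": "text", ...}` parts avoids the crash. A minimal sketch of the equivalent request (the endpoint URL is a placeholder for your local FastChat server):

```python
import json

# Same request as above, but with "content" flattened to a plain string,
# which FastChat's conversation template can concatenate directly.
payload = {
    "model": "qwen-72b-chat-int4",
    "messages": [
        {
            "role": "user",
            # plain string instead of [{"type": "text", "text": ...}]
            "content": "Assistant is a large language model trained by Meta.",
        }
    ],
    "stream": True,
    "stop": ["\nObservation:", "Observation:"],
}

body = json.dumps(payload)
# e.g. POST body to http://localhost:8000/v1/chat/completions
```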

ERROR

ERROR | stderr |   File "/app/fastchat/serve/openai_api_server.py", line 420, in create_chat_completion
ERROR | stderr |     gen_params = await get_gen_params(
ERROR | stderr |   File "/app/fastchat/serve/openai_api_server.py", line 329, in get_gen_params
ERROR | stderr |     prompt = conv.get_prompt()
ERROR | stderr |   File "/app/fastchat/conversation.py", line 168, in get_prompt
ERROR | stderr |     ret += role + "\n" + message + self.sep + "\n"
ERROR | stderr | TypeError: can only concatenate str (not "tuple") to str
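The traceback shows that `conv.get_prompt()` string-concatenates each message, so it fails when the OpenAI-style list-of-parts `content` reaches the conversation template unflattened. A minimal sketch of a normalization step that `get_gen_params` could apply before building the prompt (`flatten_content` is a hypothetical helper, not part of FastChat):

```python
def flatten_content(content):
    """Collapse OpenAI-style content-part lists into a plain string.

    A plain string passes through unchanged; a list such as
    [{"type": "text", "text": "hello"}] is joined into one string.
    (Hypothetical helper -- a sketch, not FastChat's actual fix.)
    """
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        texts = []
        for part in content:
            # Keep only text parts; other part types would need model support.
            if isinstance(part, dict) and part.get("type") == "text":
                texts.append(part.get("text", ""))
        return "\n".join(texts)
    return str(content)
```

With this applied to each message's `content` before `conv.append_message(...)`, both string and list-of-parts payloads produce a concatenable string.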
maziyarpanahi commented 3 months ago

If you use the OpenAI-compatible serving inside chat-ui by Hugging Face, you will get this error. It doesn't happen with other local LLMs served behind an OpenAI-compatible API (LM Studio, TGI, vLLM, etc.):

TypeError: can only concatenate str (not "tuple") to str