QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

Problem with multi-turn calls through the OpenAI API format #1149

Closed aaronliu7 closed 6 months ago

aaronliu7 commented 6 months ago

是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?

该问题是否在FAQ中有解答? | Is there an existing answer for this in FAQ?

当前行为 | Current Behavior

Traceback (most recent call last): File "/home/lzl/anaconda3/envs/llm/lib/python3.8/site-packages/openai/api_requestor.py", line 413, in handle_error_response error_data = resp["error"] KeyError: 'error'

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "draft_api2.py", line 32, in print("GPT回答: ", send_message(message)) File "draft_api2.py", line 16, in send_message response = openai.ChatCompletion.create( File "/home/lzl/anaconda3/envs/llm/lib/python3.8/site-packages/openai/api_resources/chat_completion.py", line 25, in create return super().create(*args, **kwargs) File "/home/lzl/anaconda3/envs/llm/lib/python3.8/site-packages/openai/api_resources/abstract/engine_apiresource.py", line 155, in create response, , api_key = requestor.request( File "/home/lzl/anaconda3/envs/llm/lib/python3.8/site-packages/openai/api_requestor.py", line 299, in request resp, got_stream = self._interpret_response(result, stream) File "/home/lzl/anaconda3/envs/llm/lib/python3.8/site-packages/openai/api_requestor.py", line 710, in _interpret_response self._interpret_response_line( File "/home/lzl/anaconda3/envs/llm/lib/python3.8/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line raise self.handle_error_response( File "/home/lzl/anaconda3/envs/llm/lib/python3.8/site-packages/openai/api_requestor.py", line 415, in handle_error_response raise error.APIError( openai.error.APIError: Invalid response object from API: '{"detail":"Invalid request"}' (HTTP response code was 400)

期望行为 | Expected Behavior

Multi-turn calls should work through the OpenAI API format.

复现方法 | Steps To Reproduce

import openai

# point the OpenAI client at the locally served Qwen OpenAI-compatible API
openai.api_base = "http://localhost:8099/v1"
openai.api_key = "none"

# conversation history starts with a single system message
dialogue_history = [
    {'role': 'system', 'content': '你是个友好的聊天机器人。'}  # "You are a friendly chatbot."
]

def send_message(message, model="Qwen"):
    # append the new user message; the assistant reply below is never
    # appended back, so the history accumulates consecutive user turns
    dialogue_history.append(message)

    response = openai.ChatCompletion.create(
        model=model,
        messages=dialogue_history,
        stream=False,
        stop=[]
    )

    reply = response.choices[0].message['content']

    return reply

while True:
    user_input = input("输入: ")  # "Input: "
    message = {"role": "user", "content": user_input}
    print("回答: ", send_message(message))  # "Answer: "

运行环境 | Environment

OS: Ubuntu 20.04
Python: 3.8
Transformers: 4.37.2
PyTorch: 2.2.1
CUDA: 12.1

备注 | Anything else?

No response