THUDM / ChatGLM2-6B

ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型

[BUG/Help] openai_api.py调用报错404 #586

Open Tanorika opened 10 months ago

Tanorika commented 10 months ago

Is there an existing issue for this?

Current Behavior

API console:

```
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 7/7 [02:18<00:00, 19.81s/it]
INFO:     Started server process [4312]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:     127.0.0.1:53040 - "POST /v1/chat/completions/chat/completions HTTP/1.1" 404 Not Found
```

Client code:

```python
import openai

if __name__ == "__main__":
    openai.api_base = "http://localhost:8000/v1"
    openai.api_key = "none"
    for chunk in openai.ChatCompletion.create(
        model="chatglm2-6b",
        messages=[{"role": "user", "content": "你好"}],
        stream=True
    ):
        if hasattr(chunk.choices[0].delta, "content"):
            print(chunk.choices[0].delta.content, end="", flush=True)
```

Error message:

```
Traceback (most recent call last):
  File "D:\Software\Py310\lib\site-packages\openai\api_requestor.py", line 413, in handle_error_response
    error_data = resp["error"]
KeyError: 'error'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "d:\Users\Desktop\import openasdai.py", line 5, in <module>
    for chunk in openai.ChatCompletion.create(
  File "D:\Software\Py310\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "D:\Software\Py310\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
  File "D:\Software\Py310\lib\site-packages\openai\api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "D:\Software\Py310\lib\site-packages\openai\api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "D:\Software\Py310\lib\site-packages\openai\api_requestor.py", line 775, in _interpret_response_line
    raise self.handle_error_response(
  File "D:\Software\Py310\lib\site-packages\openai\api_requestor.py", line 415, in handle_error_response
    raise error.APIError(
openai.error.APIError: Invalid response object from API: '{"detail":"Not Found"}' (HTTP response code was 404)
```
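Note that the logged request path, `/v1/chat/completions/chat/completions`, contains the endpoint path twice. The old `openai` Python client (v0.x) appends `/chat/completions` to `openai.api_base` itself, so this doubling typically means `api_base` was set to the full endpoint URL rather than stopping at `/v1`. The snippet posted above sets `api_base` correctly, so the run that produced this log may have used a different value. A minimal sketch of how the doubled path arises (the URLs here are illustrative):

```python
# The openai v0.x client builds the request URL as api_base + endpoint path,
# so api_base must end at "/v1" and must NOT include the endpoint itself.
ENDPOINT = "/chat/completions"

correct_base = "http://localhost:8000/v1"
wrong_base = "http://localhost:8000/v1/chat/completions"  # endpoint baked in by mistake

# Correct base: the server route matches.
print(correct_base + ENDPOINT)
# http://localhost:8000/v1/chat/completions

# Wrong base: the client appends the path again, and the server answers 404.
print(wrong_base + ENDPOINT)
# http://localhost:8000/v1/chat/completions/chat/completions
```

If the doubled path appears in your Uvicorn log, check the actual `api_base` value in the script you ran, not just the one you intended to use.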

Expected Behavior

No response

Steps To Reproduce

Run `openai_api.py` with Python, then call the server the way the official documentation shows; the request fails with Not Found.

Environment

- OS: Windows 10
- Python: 3.10
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): Yes

Anything else?

No response

windances commented 10 months ago

No such issue in a Windows 10 WSL environment.