lm-sys / FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Apache License 2.0

Does FastChat support ChatGLM3-6B? Currently it seems unsupported: 400 Bad Request #2743

Open · SmileLollipop opened this issue 9 months ago

SmileLollipop commented 9 months ago

```
$ python -m fastchat.serve.openai_api_server --host localhost --port 8000
INFO:     Started server process [17636]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:8000 (Press CTRL+C to quit)
INFO:     127.0.0.1:12208 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
INFO:     127.0.0.1:12215 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
INFO:     127.0.0.1:12219 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
INFO:     127.0.0.1:12222 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
```
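For reference, a 400 from this endpoint is typically what the API server returns when no running worker serves the requested model name. A minimal reproduction sketch, assuming the client asked for the model name `chatglm3-6b` (the actual request body is not shown in the report):

```bash
# Hypothetical reproduction of the failing request; the model name
# "chatglm3-6b" is assumed from the issue title, not taken from the log.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "chatglm3-6b",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```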

infwinston commented 9 months ago

It's supported as of https://github.com/lm-sys/FastChat/pull/2622. Can you try this command?

python3 -m fastchat.serve.cli --model-path THUDM/chatglm3-6b
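For the OpenAI-compatible API path from the original report, the CLI alone is not enough: the API server resolves model names through a controller and registered model workers. A sketch of the usual three-process setup, reusing the model path above and the host/port from the report:

```bash
# Terminal 1: start the controller, which tracks registered workers
python3 -m fastchat.serve.controller

# Terminal 2: start a worker that loads ChatGLM3-6B and registers with the controller
python3 -m fastchat.serve.model_worker --model-path THUDM/chatglm3-6b

# Terminal 3: start the OpenAI-compatible API server from the original report
python -m fastchat.serve.openai_api_server --host localhost --port 8000
```

Once the worker has registered the model with the controller, requests for it through `/v1/chat/completions` should reach the worker instead of returning 400.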