Startup command:
(venv) C:\Windows\System32>uvicorn chatglm_cpp.openai_api:app --host 127.0.0.1 --port 4523
I ran it on m2 ultra and encountered the same error. The running command was "MODEL=./chatglm.cpp/models/chatglm4-ggml.bin uvicorn chatglm_cpp.openai_api:app --host 0.0.0.0 --port 8848"
Fixed by #309. Will release a new version soon.
I looked at the change in #309 and edited the code in site-packages directly, replacing encode_messages with apply_chat_template; the error no longer occurs.
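For anyone hitting this before upgrading, the workaround amounts to roughly the following (a minimal sketch based on this thread; the tokenizer attribute and the apply_chat_template signature are assumptions and may differ between chatglm_cpp versions):

```python
import chatglm_cpp

# Load the GLM4 model the same way openai_api.py does (path is illustrative).
pipeline = chatglm_cpp.Pipeline("./chatglm.cpp/models/chatglm4-ggml.bin")
messages = [chatglm_cpp.ChatMessage(role="user", content="你好")]
max_context_length = 2048

# Old call in site-packages/chatglm_cpp/openai_api.py, which errors on GLM4 models:
# input_ids = pipeline.tokenizer.encode_messages(messages, max_context_length)

# Replacement described above (the change applied by #309):
input_ids = pipeline.tokenizer.apply_chat_template(messages, max_context_length)
print(len(input_ids))  # prompt token count
```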
I've worked around it the same way for now, thanks!
A new version has been released; you can update the Python package. Closing this for now.
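Assuming the package was installed from PyPI, upgrading with something like the following should pick up the release that includes #309:

```sh
pip install -U chatglm-cpp
```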
Model: glm4-9b-chat (q4_0 quantization). Endpoint: /v1/chat/completions. Request payload:
Backend error log: