xorbitsai / inference

Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
https://inference.readthedocs.io
Apache License 2.0

致命错误 (fatal error): bad revision 'HEAD' #1540

Open wu-xiaochen opened 2 months ago

wu-xiaochen commented 2 months ago

I started the Xinference server with "xinference-local --host 0.0.0.0 --port 9997", but it printed "致命错误: bad revision 'HEAD'" (fatal error: bad revision 'HEAD'), and the shell stopped at "Starting Xinference at endpoint: http://0.0.0.0:9997".
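"fatal: bad revision 'HEAD'" is the message git prints when HEAD does not resolve to a commit (for example, in an empty or broken repository). As a quick check, the sketch below assumes the message comes from a git invocation in the directory where xinference-local was launched, and simply runs git rev-parse HEAD there to see whether git itself reproduces the error; the directory path is a placeholder.

```python
import subprocess

# Hedged diagnostic: check whether git's HEAD resolves in the directory
# the server was started from. "fatal: bad revision 'HEAD'" is what git
# prints when HEAD does not point at any commit.
result = subprocess.run(
    ["git", "rev-parse", "HEAD"],
    cwd=".",  # placeholder: the directory you launched xinference-local from
    capture_output=True,
    text=True,
)
print("return code:", result.returncode)
print("stdout:", result.stdout.strip())
print("stderr:", result.stderr.strip())
```

If git fails with the same message, launching the server from a directory outside that repository (or repairing the repository) is worth trying; if it succeeds, the message is coming from somewhere else.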

qinxuye commented 2 months ago

Can you open the UI? Normally the command line stops at "Starting xxx".

wu-xiaochen commented 2 months ago

Can you open the UI? Normally the command line stops at "Starting xxx".

Yes, I can open the UI and the FastAPI docs page, but when I try to test the API, it returns "500: Internal Server Error" (content-length: 21, content-type: text/plain; charset=utf-8, date: Fri, 24 May 2024 05:32:41 GMT, server: uvicorn).
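Since the web UI loads but API calls return a bare 500, the short response body and the server-side traceback usually carry the real error. Below is a minimal sketch of how to read the raw body, assuming the server is reachable at http://127.0.0.1:9997 and that the OpenAI-compatible /v1/models route is available; adjust host, port, and route to your setup.

```python
import requests

BASE_URL = "http://127.0.0.1:9997"  # adjust to the host/port xinference-local was started with

# Query the OpenAI-compatible model-listing route and print the raw body,
# which often contains the text hidden behind a bare "500 Internal Server Error".
resp = requests.get(f"{BASE_URL}/v1/models", timeout=10)
print("status:", resp.status_code)
print("body:", resp.text)
```

Restarting the server with a more verbose log level (recent versions document a --log-level debug option for xinference-local) should also print the full traceback in the terminal, which is usually the fastest way to see what actually raised the 500.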