llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deployment, such as a UI, a RESTful API, auto-scaling, compute resource management, monitoring, and more.
API server blocked when one request is in progress #137
More tests are needed for this issue.
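A minimal reproduction sketch is shown below. The endpoint path, port, and payload schema are assumptions and would need to be adapted to the actual deployment; the idea is simply to fire two requests concurrently and time each one. If the API server handles requests serially, the second response should take roughly as long as both requests combined.

```python
# Hypothetical reproduction sketch for issue #137: send two concurrent
# requests and time each response. If the server is blocked while one
# request is in progress, req-2's elapsed time will be roughly the sum
# of both requests' processing times.
# NOTE: the URL and payload below are assumptions, not the project's
# documented API.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8000/api/v1/generate"       # assumed endpoint
PAYLOAD = {"prompt": "Hello", "max_tokens": 128}    # assumed schema


def timed_request(tag: str) -> str:
    """POST one inference request and report its status and latency."""
    start = time.monotonic()
    resp = requests.post(URL, json=PAYLOAD, timeout=120)
    elapsed = time.monotonic() - start
    return f"{tag}: status={resp.status_code} elapsed={elapsed:.1f}s"


with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(timed_request, tag) for tag in ("req-1", "req-2")]
    for future in futures:
        print(future.result())
```

A test along these lines could assert that the two latencies overlap (e.g., the slower one is well under 2x the faster one) rather than accumulate, which would catch regressions where the server falls back to serial handling.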