OpenCSGs / llm-inference

llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deployment, such as a UI, a RESTful API, auto-scaling, computing-resource management, monitoring, and more.
Apache License 2.0

No default value for "timeout" when "batch_wait_timeout_s: 0" is missing from the YAML config #48

Open depenglee1707 opened 7 months ago

depenglee1707 commented 7 months ago

This causes a timeout exception when batch_wait_timeout_s: 0 is missing from the YAML config; it must be a bug ~_~
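For context, a minimal sketch of the kind of model YAML this report is about; the surrounding keys are illustrative assumptions, not the project's exact schema:

```yaml
# Hypothetical deployment config. Only batch_wait_timeout_s comes
# from the report; the other field names are assumptions.
model_config:
  max_batch_size: 8
  # batch_wait_timeout_s: 0   # <- omitting this line reportedly
  #                              triggers the timeout exception
```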

depenglee1707 commented 7 months ago

Cannot reproduce! @SeanHH86 could you paste the configuration here?
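A minimal sketch of the kind of guard that would avoid the exception, assuming the YAML is parsed into a plain dict; the helper name and key path are hypothetical, not the project's actual code:

```python
# Hypothetical sketch: fall back to 0 when batch_wait_timeout_s is
# absent from the parsed config, rather than failing on a missing key.
from typing import Any, Dict

def get_batch_wait_timeout_s(model_config: Dict[str, Any]) -> float:
    # dict.get supplies a default, so a config that omits the key
    # behaves the same as one that sets "batch_wait_timeout_s: 0".
    return float(model_config.get("batch_wait_timeout_s", 0))
```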