neuralmagic / nm-vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://nm-vllm.readthedocs.io

Add health probe #385

Closed · joerunde closed this 3 months ago

joerunde commented 3 months ago

So the server won't delete itself 😉

This will ensure the server shuts down when the backend terminates, provided /health probes are used to manage the container lifecycle. We could also consider internal heartbeats for this, but that's probably out of scope for the initial work here.
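For context on what such a probe does: a liveness-style /health route typically just checks whether the backend engine is still alive and starts failing once it isn't, so an orchestrator watching the probe can restart the container. Below is a minimal sketch of that pattern; the `Engine` class and its `check_health()` method are hypothetical stand-ins, not nm-vllm's actual API.

```python
# Minimal sketch (not the PR's code): a /health route that fails once the
# backend engine has died, so an orchestrator managing the container
# lifecycle via liveness probes will restart the pod.
from fastapi import FastAPI, Response


class Engine:
    """Placeholder backend; assume the real engine tracks its own loop state."""

    def __init__(self) -> None:
        self.errored = False

    async def check_health(self) -> None:
        # Hypothetical liveness check: raise once the background loop is gone.
        if self.errored:
            raise RuntimeError("background loop has terminated")


engine = Engine()
app = FastAPI()


@app.get("/health")
async def health() -> Response:
    try:
        await engine.check_health()
        return Response(status_code=200)
    except RuntimeError:
        # A failing probe lets the orchestrator kill and restart the
        # container instead of leaving a dead server running.
        return Response(status_code=500)
```

With a route like this, a Kubernetes livenessProbe (or any equivalent health check) pointed at /health would recreate the container once the backend dies, which is the container-lifecycle scenario described above.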

github-actions[bot] commented 3 months ago

👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

🚀

joerunde commented 3 months ago

Closing in favor of a new PR instead.