neuralmagic / nm-vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://nm-vllm.readthedocs.io

Use random port for backend #390

Closed joerunde closed 3 months ago

joerunde commented 3 months ago

Picks an open port to use and boots both the client and server with it
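A common way to pick an open port (a sketch of the general technique, not necessarily this PR's exact implementation) is to bind a socket to port 0 so the OS assigns a free ephemeral port, then pass that port number to both the client and server:

```python
import socket

def find_free_port() -> int:
    """Ask the OS for a free ephemeral port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))  # port 0 -> OS picks an unused port
        return s.getsockname()[1]

if __name__ == "__main__":
    port = find_free_port()
    # Hypothetical usage: start the backend and client with the same port.
    print(f"Booting client and server on port {port}")
```

Note there is a small race window: the port is released when the socket closes, so another process could grab it before the server binds. For a test or local serving setup that window is usually acceptable.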

github-actions[bot] commented 3 months ago

👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs will not trigger a full CI run by default. Instead, only the fastcheck CI will run, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

🚀

joerunde commented 3 months ago

The formatter has been run; looks like it just did an odd thing or two 🤷

joerunde commented 3 months ago

(gonna merge this so I can go put the health checks on top of it too)