neuralmagic / nm-vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://nm-vllm.readthedocs.io

Frontend mp flag #384

Closed joerunde closed 3 months ago

joerunde commented 3 months ago

@robertgshaw2-neuralmagic

This adds the --disable-frontend-multiprocessing flag, and should also correctly detect embedding models and disable multiprocessing for them. (Also includes some unrelated formatting changes.)

The backend startup is wrapped in a context manager that handles process startup and shutdown at exit, so we don't have to muck around much in the existing server lifecycle code.
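The context-manager approach described above could be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: the names `build_backend` and `_backend_worker` are made up, and the worker is a placeholder for the real RPC backend process.

```python
import multiprocessing
import time
from contextlib import contextmanager


def _backend_worker():
    # Placeholder for the real backend loop (e.g. an RPC server
    # talking to the engine); here it just idles until terminated.
    time.sleep(60)


@contextmanager
def build_backend(disable_frontend_multiprocessing: bool):
    """Start the backend in a child process and guarantee shutdown
    on exit, keeping the server lifecycle code untouched."""
    if disable_frontend_multiprocessing:
        # Run in-process: nothing to start or tear down.
        yield None
        return
    proc = multiprocessing.Process(target=_backend_worker)
    proc.start()
    try:
        yield proc
    finally:
        # Shutdown happens at context exit, even on errors.
        proc.terminate()
        proc.join()
```

The server would then wrap its startup in `with build_backend(args.disable_frontend_multiprocessing) as backend: ...`, so cleanup is automatic however the server exits.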

github-actions[bot] commented 3 months ago

👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI, as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

🚀

robertgshaw2-neuralmagic commented 3 months ago

Thanks!