vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

Update default max_num_batched_tokens for chunked prefill to 2048 #10544

Open · mgoin opened 1 day ago

mgoin commented 1 day ago

We have seen that 512 is a very conservative value, especially on H100. Increasing it to 2048 is still somewhat conservative, but it is a clear improvement when processing large prefills and for overall throughput.
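For context, a minimal sketch of how this setting can be overridden explicitly with the offline `LLM` entrypoint (the model name and values below are illustrative and not part of this PR); the change here only affects the default used when `max_num_batched_tokens` is left unset:

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-125m",     # any model; a small one is used here only as an example
    enable_chunked_prefill=True,   # chunked prefill must be enabled for this budget to apply
    max_num_batched_tokens=2048,   # per-step token budget; 512 was the old conservative default
)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```

The same override is available when serving via the `--max-num-batched-tokens` CLI flag.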

github-actions[bot] commented 1 day ago

👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of the CI tests to catch errors quickly. You can run additional CI tests on top of that by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

🚀

mergify[bot] commented 12 hours ago

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @mgoin.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
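A sketch of one common way to resolve the conflicts locally; it assumes the main vLLM repository is configured as the `upstream` remote and the PR branch is checked out (remote and branch names are assumptions, adjust as needed):

```bash
# Fetch the latest upstream main and replay the PR branch on top of it.
git fetch upstream
git rebase upstream/main

# Resolve any conflicting files, then continue the rebase.
git add <resolved-files>
git rebase --continue

# Update the PR; --force-with-lease avoids clobbering work pushed elsewhere.
git push --force-with-lease origin <pr-branch>
```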