vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Minor] Revert change in offline inference example #10545

Closed. WoosukKwon closed this pull request 4 days ago.

WoosukKwon commented 4 days ago

This PR moves the change from #10392 into a new file, offline_inference_cli.py, and reverts the changes to offline_inference.py. The purpose is to keep the offline_inference.py example as simple as possible, so that users with no background in vLLM can understand it immediately.
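For context, the simplified offline_inference.py this PR restores follows the canonical vLLM quickstart shape: hard-coded prompts, default engine arguments, no CLI parsing. A minimal sketch (the model name and sampling values below are illustrative, not taken from this PR):

```python
from vllm import LLM, SamplingParams

# A few hard-coded prompts keep the example self-contained.
prompts = [
    "Hello, my name is",
    "The future of AI is",
]

# Sampling settings; the exact values are illustrative.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Create an LLM with default engine arguments.
llm = LLM(model="facebook/opt-125m")

# Generate completions for all prompts in one batch.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")
```

The CLI-driven variant that moves into offline_inference_cli.py would, following the pattern introduced in #10392, expose the engine arguments on the command line instead. A sketch assuming the standard EngineArgs helpers, not the exact file contents:

```python
import dataclasses

from vllm import LLM, EngineArgs, SamplingParams
from vllm.utils import FlexibleArgumentParser

# Register every engine argument (model, dtype, tensor parallelism, ...)
# as a CLI flag on the parser.
parser = FlexibleArgumentParser()
parser = EngineArgs.add_cli_args(parser)
args = parser.parse_args()

# Build the engine from whatever the user passed on the command line.
engine_args = EngineArgs.from_cli_args(args)
llm = LLM(**dataclasses.asdict(engine_args))

outputs = llm.generate(["Hello, my name is"],
                       SamplingParams(temperature=0.8, top_p=0.95))
print(outputs[0].outputs[0].text)
```

Splitting the two keeps the first example readable for newcomers while still providing a configurable entry point for users who want to pass engine options from the shell.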

github-actions[bot] commented 4 days ago

👋 Hi! Thank you for contributing to the vLLM project. Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

- Add `ready` label to the PR
- Enable auto-merge.

🚀