vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Feature]: Enhance integration with advanced LB/gateways with better load/cost reporting and LoRA management #10086

Open · liu-cong opened 1 week ago

liu-cong commented 1 week ago

🚀 The feature, motivation and pitch

There is huge potential in more advanced load balancing strategies tailored to the unique characteristics of AI inference, compared to basic strategies such as round robin. The llm instance gateway is one such effort and is already demonstrating significant performance wins. vLLM can demonstrate leadership in this space by providing better integration with advanced LBs/gateways.

This doc captures the overall requirements for model servers to better support the llm instance gateway. Luckily, vLLM already has many features and metrics that enable more efficient load balancing, such as the exposed KVCacheUtilization metric.
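As a concrete illustration of how a balancer can already consume these signals, here is a minimal sketch that polls vLLM's Prometheus `/metrics` endpoint and extracts a KV-cache utilization gauge. The port and the metric name `vllm:gpu_cache_usage_perc` are assumptions about a typical recent vLLM deployment, not something specified in this issue.

```python
# Minimal sketch: poll vLLM's Prometheus endpoint and extract a load signal.
# Assumptions (not from this issue): the server listens on localhost:8000 and
# exposes the gauge `vllm:gpu_cache_usage_perc`; adjust both for your setup.
from typing import Optional

import requests


def scrape_kv_cache_utilization(base_url: str = "http://localhost:8000") -> Optional[float]:
    """Return KV-cache utilization in [0, 1], or None if the metric is absent."""
    text = requests.get(f"{base_url}/metrics", timeout=1.0).text
    for line in text.splitlines():
        # Prometheus text format: `metric_name{labels} value`
        if line.startswith("vllm:gpu_cache_usage_perc"):
            return float(line.rsplit(" ", 1)[-1])
    return None


if __name__ == "__main__":
    print("KV cache utilization:", scrape_kv_cache_utilization())
```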

This is a high-level breakdown of the feature requests:

- Dynamic LoRA load/unload (a LoRA management sketch follows this list).

- Load/cost reporting in metrics.

- Load/cost reporting in response headers in ORCA format (a response-header sketch follows this list).

  Open Request Cost Aggregation (ORCA) is a lightweight open protocol for reporting load/cost information to LBs and is already integrated with Envoy and gRPC.

  This feature will be controlled by a new engine argument `--orca_formats` (default `[]`, meaning ORCA is disabled; available values are one or more of `[BIN, TEXT, JSON]`). If the feature is enabled, vLLM will report the metrics defined in the doc as HTTP response headers in the OpenAI-compatible APIs.

- Out-of-band load/cost reporting API in ORCA format (a probe-endpoint sketch follows this list).

  vLLM will expose a lightweight API to report the same metrics in ORCA format. This enables LBs to proactively probe the API and get real-time load information. This is a long-term vision and more details will be shared later.
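For the dynamic LoRA item, a gateway-driven flow could look roughly like the sketch below. It assumes the runtime LoRA endpoints that recent vLLM builds expose when `VLLM_ALLOW_RUNTIME_LORA_UPDATING` is set (`/v1/load_lora_adapter` and `/v1/unload_lora_adapter`); the paths, payload fields, and adapter names here are assumptions for illustration, not an interface this issue commits to.

```python
# Minimal sketch of gateway-driven LoRA management. Assumes the runtime LoRA
# endpoints some vLLM builds expose behind VLLM_ALLOW_RUNTIME_LORA_UPDATING;
# paths, payload fields, and names are illustrative, not a settled API.
import requests

BASE_URL = "http://localhost:8000"  # assumed vLLM OpenAI-compatible server


def load_adapter(name: str, path: str) -> None:
    resp = requests.post(
        f"{BASE_URL}/v1/load_lora_adapter",
        json={"lora_name": name, "lora_path": path},
        timeout=30,
    )
    resp.raise_for_status()


def unload_adapter(name: str) -> None:
    resp = requests.post(
        f"{BASE_URL}/v1/unload_lora_adapter",
        json={"lora_name": name},
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    load_adapter("sql-lora", "/models/adapters/sql-lora")
    unload_adapter("sql-lora")
```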
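For the response-header item, the sketch below shows the general shape of attaching an ORCA TEXT report to every OpenAI-compatible response via a Starlette/FastAPI middleware (the server stack vLLM's OpenAI frontend uses). The header name `endpoint-load-metrics`, the `TEXT` key=value encoding, the `named_metrics.` prefix, and `get_load_stats()` are assumptions borrowed loosely from Envoy's ORCA conventions rather than a confirmed vLLM design.

```python
# Minimal sketch of ORCA-style load reporting in response headers.
# Assumptions (not confirmed by this issue): the header name
# `endpoint-load-metrics`, the TEXT encoding, and get_load_stats() are
# placeholders; a real implementation would honor --orca_formats and read
# stats from the engine's scheduler.
from typing import Dict

from fastapi import FastAPI, Request

app = FastAPI()


def get_load_stats() -> Dict[str, float]:
    # Placeholder: in vLLM these values would come from engine/scheduler stats.
    return {"kv_cache_utilization": 0.42, "num_requests_waiting": 3.0}


@app.middleware("http")
async def add_orca_text_header(request: Request, call_next):
    response = await call_next(request)
    stats = get_load_stats()
    # TEXT format: comma-separated key=value pairs, with custom metrics under
    # a named_metrics. prefix (per ORCA conventions as understood here).
    report = ",".join(f"named_metrics.{k}={v}" for k, v in stats.items())
    response.headers["endpoint-load-metrics"] = f"TEXT {report}"
    return response


@app.post("/v1/completions")
async def completions() -> dict:
    # Stand-in for the real OpenAI-compatible completions handler.
    return {"choices": [{"text": "..."}]}
```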
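For the out-of-band item, which the issue leaves as a longer-term vision, the probe API could be as small as the sketch below: an endpoint an LB can poll for the same report. The `/load` path, the JSON shape, and `get_load_stats()` are purely hypothetical placeholders.

```python
# Hypothetical sketch of an out-of-band load-reporting endpoint an LB could
# probe. The /load path, JSON shape, and get_load_stats() are placeholders;
# the issue explicitly defers the concrete design.
from typing import Dict

from fastapi import FastAPI

app = FastAPI()


def get_load_stats() -> Dict[str, float]:
    # Placeholder for real engine/scheduler statistics.
    return {"kv_cache_utilization": 0.42, "num_requests_waiting": 3.0}


@app.get("/load")
async def load_report() -> dict:
    # JSON rendering of an ORCA-style report; BIN/TEXT variants could be
    # negotiated via request headers in a fuller design.
    return {"named_metrics": get_load_stats()}
```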

cc @simon-mo

Alternatives

No response

Additional context

No response


ahg-g commented 2 days ago

/cc

simon-mo commented 1 day ago

This sounds great! In general, in vLLM we want to ensure we are compatible with open formats for load balancing and observability. This helps people actually run vLLM in production.

As long as the overhead in the default case is minimal, I'm in full support.