vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[RFC]: Enhancing LoRA Management for Production Environments in vLLM #6275

Open Jeffwan opened 1 month ago

Jeffwan commented 1 month ago

This RFC proposes improvements to the management of Low-Rank Adaptation (LoRA) in vLLM to make it more suitable for production environments. This proposal aims to address several pain points observed in the current implementation. Feedback and discussions are welcome, and we hope to gather input and refine the proposal based on community insights.

Motivation.

LoRA integration in production environments faces several challenges that need to be addressed to ensure smooth and efficient deployment and management. The main issues observed include:

  1. Visibility of LoRA Information: Currently, the relationship between LoRA and base models is not exposed clearly by the engine. The /v1/models endpoint does not display this information. Related issues: https://github.com/vllm-project/vllm/issues/6274

  2. Dynamic Loading and Unloading: LoRA adapters cannot be dynamically loaded or unloaded after the server has started. Related issues: https://github.com/vllm-project/vllm/issues/3308 https://github.com/vllm-project/vllm/issues/4068 https://github.com/vllm-project/vllm/issues/5491

  3. Remote Registry Support: LoRA adapters cannot be pulled from remote model repositories during runtime, making it cumbersome to manage artifacts locally. Related issues: https://github.com/vllm-project/vllm/issues/6233 https://github.com/vllm-project/vllm/issues/6231

  4. Observability: There is a lack of metrics and observability enhancements related to LoRA, making it difficult to monitor and manage.

  5. Cluster-level Support: Information about LoRA is not easily accessible to resource managers, which hinders service discovery, load balancing, and scheduling in cluster environments. Related issues: https://github.com/vllm-project/vllm/issues/4873

Proposed Change.

1. Support Dynamically Loading or Unloading LoRA Adapters

To enhance flexibility and manageability, we propose introducing the ability to dynamically load and unload LoRA adapters at runtime.
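For illustration, here is a minimal sketch of what such a runtime adapter-management API could look like from a client's point of view. The endpoint names and payload fields (load_lora_adapter, unload_lora_adapter, lora_name, lora_path) are hypothetical and only show the intended workflow:

import requests

BASE_URL = "http://localhost:8000"

# Register a new LoRA adapter with the running server, without a restart.
resp = requests.post(
    f"{BASE_URL}/v1/load_lora_adapter",
    json={"lora_name": "sql_adapter", "lora_path": "/data/loras/sql_adapter"},
)
resp.raise_for_status()

# Later, unload it to free adapter slots for other adapters.
resp = requests.post(
    f"{BASE_URL}/v1/unload_lora_adapter",
    json={"lora_name": "sql_adapter"},
)
resp.raise_for_status()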

2. Load LoRA Adapters from Remote Storage

Enabling LoRA adapters to be loaded from remote storage at runtime will simplify artifact management and deployment. One possible technical approach is to add a helper such as get_adapter_absolute_path that resolves a remote adapter reference to a local path before loading; a sketch follows.
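A minimal sketch of what get_adapter_absolute_path could do, assuming Hugging Face Hub as the remote source; the exact signature and the set of supported sources are open questions of this RFC:

import os

from huggingface_hub import snapshot_download


def get_adapter_absolute_path(lora_path: str) -> str:
    # Local adapter directories are simply resolved to absolute paths.
    if os.path.isdir(lora_path):
        return os.path.abspath(lora_path)
    # Otherwise treat the value as a remote repo ID and download a snapshot
    # of the adapter into the local cache, returning the cached directory.
    return snapshot_download(repo_id=lora_path)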

3. Build Better LoRA Model Lineage

To improve the visibility and management of LoRA models, we propose building more robust model lineage metadata, for example by exposing the relationship between each LoRA adapter and its base model through the /v1/models endpoint (see the sketch below).
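A client could then discover adapter-to-base-model relationships directly from the endpoint. The parent field below is hypothetical and only illustrates the kind of lineage metadata being proposed:

import requests

models = requests.get("http://localhost:8000/v1/models").json()
for model in models["data"]:
    # A LoRA adapter entry would reference the base model it was trained
    # against; base models themselves would carry no parent.
    print(model["id"], "->", model.get("parent", "<base model>"))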

4. LoRA Observability Enhancement

Improving observability by adding LoRA-specific metrics will help with monitoring and management. Proposed metrics could include, for example, per-adapter request counts and the number of currently loaded adapters; a sketch follows.
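As an illustration, the sketch below uses prometheus_client with hypothetical metric names; the actual metric set and naming would follow vLLM's existing metrics conventions:

from prometheus_client import Counter, Gauge

# Hypothetical LoRA-specific metrics.
LORA_REQUESTS = Counter(
    "vllm_lora_requests_total",
    "Requests served per LoRA adapter.",
    ["lora_name"],
)
LOADED_LORA_ADAPTERS = Gauge(
    "vllm_loaded_lora_adapters",
    "Number of LoRA adapters currently loaded in the engine.",
)

# Example instrumentation points in the serving layer:
LOADED_LORA_ADAPTERS.inc()                            # on adapter load
LORA_REQUESTS.labels(lora_name="sql_adapter").inc()   # per request using that adapter
LOADED_LORA_ADAPTERS.dec()                            # on adapter unload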

5. Control Plane Support (Service Discovery, Load Balancing, Scheduling) for LoRAs

Since the vLLM community focuses primarily on the inference engine, the cluster-level features will be a separate design that I am working on in Kubernetes WG-Serving. I will link it back to this issue shortly.

Feedback Period.

No response

CC List.

@simon-mo @Yard1

Note: Please help tag the right person who worked in this area.

Any Other Things.

No response

simon-mo commented 1 month ago

I'm in favor of all these! Please also make sure it is well documented.

Yard1 commented 1 month ago

Yes, this all makes sense. Let's make sure that performance doesn't degrade too much when loading from remote storage.

codybum commented 1 month ago

Yes, the issues you have noted prevent us from running vLLM. I would also include the ability to apply (merge) more than one adapter simultaneously to a single request.

I am looking forward to these features making their way into vLLM.

lizzzcai commented 1 month ago

Hi Jeff,

Thank you for sharing the RFC on LoRA. I noticed my feature request was included, which is appreciated. I want to check whether there are also plans to implement a load/unload API for the base model? Thanks in advance for your attention to this matter.

llama-shepard commented 1 month ago

I would love to add the following feature to this RFC.

LOAD ADAPTERS FROM S3-COMPATIBLE STORAGE

LoRAX already has this feature: https://loraexchange.ai/models/adapters/#s3

This brings a new challenge to vLLM: specifying the type of the source (Hugging Face or S3). LoRAX handles this by providing a default 'adapter-source'.

It needs to support storage providers that implement the S3 API (such as Cloudflare R2): https://github.com/predibase/lorax/blob/main/server/lorax_server/utils/sources/s3.py

In LoRAX, credentials and the endpoint are configured through environment variables, e.g.:

--env "R2_ACCOUNT_ID={r2_account_id}" --env "AWS_ACCESS_KEY_ID={aws_access_key_id}" --env "AWS_SECRET_ACCESS_KEY={aws_secret_access_key}"

import os

import boto3

# Adapted from LoRAX's lorax_server/utils/sources/s3.py: the S3 endpoint is
# chosen based on environment configuration.
S3_ENDPOINT_URL = os.environ.get("S3_ENDPOINT_URL", None)
R2_ACCOUNT_ID = os.environ.get("R2_ACCOUNT_ID", None)


def get_bucket(bucket_name, config=None):
    # Cloudflare R2 exposes an S3-compatible endpoint derived from the account ID.
    if R2_ACCOUNT_ID:
        s3 = boto3.resource("s3", endpoint_url=f"https://{R2_ACCOUNT_ID}.r2.cloudflarestorage.com", config=config)
    # Any other S3-compatible storage can be reached via an explicit endpoint URL.
    elif S3_ENDPOINT_URL:
        s3 = boto3.resource("s3", endpoint_url=S3_ENDPOINT_URL, config=config)
    # Default: the standard AWS S3 endpoint.
    else:
        s3 = boto3.resource("s3", config=config)
    return s3.Bucket(bucket_name)