vLLM allocates more KV cache space than a single max-model-length request requires because it is an LLM server that batches multiple requests together for better throughput. The gpu_memory_utilization parameter is highlighted precisely so you can control what percentage of GPU memory the engine claims. max_model_len only determines the minimum KV cache needed to serve a single request; the rest of the KV cache is what allows vLLM to serve multiple requests in parallel.
If you want to run multiple small models on the same GPU, simply start each engine with a gpu_memory_utilization matching its share of the GPU, e.g.:
from vllm import LLM

# Each engine claims ~30% of total GPU memory, leaving room for the other two
model1 = LLM("facebook/opt-125m", gpu_memory_utilization=0.3)
model2 = LLM("facebook/opt-125m", gpu_memory_utilization=0.3)
model3 = LLM("facebook/opt-125m", gpu_memory_utilization=0.3)
print(model1.generate("Hello!"))
"""
Processed prompts: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 13.57it/s, est. speed input: 40.76 toks/s, output: 217.33 toks/s]
[RequestOutput(request_id=0, prompt='Hello!', prompt_token_ids=[2, 31414, 328], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text=' That is my dad. He was a wautdig with me when I was', token_ids=(280, 16, 127, 4252, 4, 91, 21, 10, 885, 4255, 20098, 19, 162, 77, 38, 21), cumulative_logprob=None, logprobs=None, finish_reason=length, stop_reason=None)], finished=True, metrics=RequestMetrics(arrival_time=1722878319.5231462, last_token_time=1722878319.5231462, first_scheduled_time=1722878319.5450768, first_token_time=1722878319.563497, time_in_queue=0.021930694580078125, finished_time=1722878319.618218), lora_request=None)]
"""
@mgoin I have a concern related to @gnpinkert's. In your example of creating multiple instances of LLM for the same model, I don't believe the copies will actually end up with the same memory allocation. This is due to the code in this function:
If I'm reading this correctly, memory is allocated as a function of both the passed gpu_memory_utilization value and some profiling of the actual current and expected future utilization of the GPU. This means model1 will end up with the largest KV cache and model3 with the smallest, because by the time we initialize model3 there are already two copies of the model lowering our measurement of free_gpu_memory.
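Here is a simplified sketch of that profiling step, assuming the same measure-free-memory-then-budget logic (the real code also runs a dummy forward pass to account for peak activation memory; the function name and structure below are mine, only torch.cuda.mem_get_info is a real API):

import torch

def estimate_num_gpu_blocks(gpu_memory_utilization: float,
                            cache_block_bytes: int) -> int:
    free_gpu_memory, total_gpu_memory = torch.cuda.mem_get_info()
    # Everything already resident (this engine's weights, but also any other
    # engine sharing the GPU) counts against the utilization budget
    already_used = total_gpu_memory - free_gpu_memory
    cache_budget = total_gpu_memory * gpu_memory_utilization - already_used
    return max(int(cache_budget // cache_block_bytes), 0)

Because the already_used term grows as model1 and model2 come up, each successive engine computes a smaller cache budget even though all three pass the same gpu_memory_utilization.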
In my (offline batch inference) context, I am running 7B models and have noticed that vLLM is CPU-bound, resulting in only ~50% GPU core utilization. Historically, pre-vLLM, I would run multiple copies of a given model per GPU using multiprocessing to improve throughput in such cases. With vLLM's GPU memory configuration this becomes very hard to reason about, and it contains potential race conditions: engines started concurrently each profile free memory that the others are about to claim. A minimal sketch of the pattern I mean follows.
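This sketch runs two engine copies on one GPU, each in its own process with a fixed memory share; the model and the 0.45 split are placeholders, and both processes still race to profile free memory at startup:

import multiprocessing as mp

def worker(prompts, results, gpu_fraction):
    from vllm import LLM  # import inside the process so CUDA initializes per-process
    llm = LLM("facebook/opt-125m", gpu_memory_utilization=gpu_fraction)
    results.extend([out.outputs[0].text for out in llm.generate(prompts)])

if __name__ == "__main__":
    mp.set_start_method("spawn")  # required when child processes use CUDA
    with mp.Manager() as manager:
        results = manager.list()
        prompts = ["Hello!"] * 100
        procs = [mp.Process(target=worker, args=(prompts[i::2], results, 0.45))
                 for i in range(2)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(f"collected {len(results)} completions")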
Would the max KV cache size not be a function of the max batch size and the max sequence length? @mgoin
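As a rough upper bound it is a function of exactly those two; a back-of-the-envelope sketch with assumed Llama-2-7B-like shapes (fp16 cache, all numbers illustrative):

num_layers   = 32
num_kv_heads = 32
head_dim     = 128
dtype_bytes  = 2      # fp16
max_seq_len  = 4096
max_batch    = 8

bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * dtype_bytes  # K and V
kv_cache_bytes  = bytes_per_token * max_seq_len * max_batch
print(f"{bytes_per_token / 2**10:.0f} KiB/token, {kv_cache_bytes / 2**30:.1f} GiB total")
# -> 512 KiB per token, 16.0 GiB for 8 concurrent 4096-token sequences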
Proposal to improve performance
Currently, vLLM allocates all available GPU memory after loading model weights, regardless of the max_model_len setting. This can lead to inefficient memory usage, especially for smaller models. I propose to modify this behavior as follows:
1. Calculate the exact memory required for the KV cache based on the max_model_len parameter, model architecture, and other relevant factors (a rough sketch of this calculation follows the lists below).
2. Allocate only the necessary GPU memory for:
   a) Model weights
   b) Calculated KV cache size
   c) A small, fixed buffer for other operations
3. Make the total GPU memory allocation a direct function of max_model_len, excluding the fixed memory used for model weights.
This change would:
- Provide users with more precise control over GPU memory usage
- Allow for more efficient resource utilization, especially in multi-model or memory-constrained environments
- Make the max_model_len parameter directly impactful on memory allocation
- Potentially enable running larger models or multiple instances on a single GPU
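As a sketch of step 1, a hypothetical helper could derive gpu_memory_utilization from max_model_len so that today's API already approximates the proposed behavior (all names and the buffer size below are mine; only torch.cuda.mem_get_info is a real API):

import torch

def utilization_for(max_model_len: int, max_num_seqs: int,
                    weights_bytes: int, kv_bytes_per_token: int,
                    buffer_bytes: int = 512 * 2**20) -> float:
    # Size the KV cache exactly for max_model_len, then express
    # weights + cache + fixed buffer as a fraction of total GPU memory
    kv_cache_bytes = kv_bytes_per_token * max_model_len * max_num_seqs
    _, total_gpu_memory = torch.cuda.mem_get_info()
    needed = weights_bytes + kv_cache_bytes + buffer_bytes
    return min(needed / total_gpu_memory, 1.0)

An engine could then be started as LLM(model, max_model_len=n, gpu_memory_utilization=utilization_for(...)), making its memory footprint track max_model_len directly.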