vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Performance]: max_model_len argument to LLM class does not limit GPU utilization. #7155

Open gnpinkert opened 1 month ago

gnpinkert commented 1 month ago

Proposal to improve performance

Currently, vLLM allocates all available GPU memory after loading model weights, regardless of the max_model_len setting. This can lead to inefficient memory usage, especially for smaller models. I propose to modify this behavior as follows:

1. Calculate the exact memory required for the KV cache based on the max_model_len parameter, the model architecture, and other relevant factors.
2. Allocate only the necessary GPU memory for:
   a) model weights,
   b) the calculated KV cache size,
   c) a small, fixed buffer for other operations.
3. Make the total GPU memory allocation a direct function of max_model_len, excluding the fixed memory used for model weights.
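
For concreteness, the calculation in step 1 would look roughly like the sketch below; the function name, parameters, and example figures are illustrative, not vLLM internals.

def estimate_kv_cache_bytes(
    max_model_len: int,
    max_batch_size: int,
    num_layers: int,
    num_kv_heads: int,
    head_dim: int,
    dtype_bytes: int = 2,  # fp16 / bf16
) -> int:
    # One key and one value vector per token, per layer, per KV head.
    bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * dtype_bytes
    return bytes_per_token * max_model_len * max_batch_size

# Example: a Llama-2-7B-like config (32 layers, 32 KV heads, head_dim 128)
# needs 2 * 32 * 32 * 128 * 2 = 512 KiB per token, i.e. ~2 GiB of KV cache
# for a single sequence at max_model_len=4096.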

This change would:

- Provide users with more precise control over GPU memory usage
- Allow for more efficient resource utilization, especially in multi-model or memory-constrained environments
- Make the max_model_len parameter directly impactful on memory allocation
- Potentially enable running larger models or multiple instances on a single GPU

Report of performance regression

No response

Misc discussion on performance

No response

Your current environment (if you think it is necessary)

PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35

Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX 6000 Ada Generation
GPU 1: NVIDIA RTX 6000 Ada Generation

Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      48 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             32
On-line CPU(s) list:                0-31
Vendor ID:                          AuthenticAMD
Model name:                         AMD Ryzen Threadripper PRO 5955WX 16-Cores
CPU family:                         25
Model:                              8
Thread(s) per core:                 2
Core(s) per socket:                 16
Socket(s):                          1
Stepping:                           2
Frequency boost:                    enabled
CPU max MHz:                        7031.2500
CPU min MHz:                        1800.0000
BogoMIPS:                           7984.58
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualisation:                     AMD-V
L1d cache:                          512 KiB (16 instances)
L1i cache:                          512 KiB (16 instances)
L2 cache:                           8 MiB (16 instances)
L3 cache:                           64 MiB (2 instances)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] mypy==1.10.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] onnx==1.16.1
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] transformers==4.42.4
[pip3] triton==2.3.1
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      PHB     0-31    0               N/A
GPU1    PHB      X      0-31    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
mgoin commented 1 month ago

vLLM allocates more KV cache space than a single max_model_len sequence requires because it is an LLM serving engine that batches multiple requests together for better throughput. The gpu_memory_utilization parameter is the highlighted knob for directly controlling what percentage of GPU memory is allocated. max_model_len actually determines the minimum space required to serve a single maximal-length request; the rest of the KV cache lets vLLM serve multiple requests in parallel.

If you want to run multiple small models on the same GPU, simply start each engine with a gpu_memory_utilization set to its share of the GPU, e.g.

from vllm import LLM

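# Three engines share one GPU; each caps its allocation at ~30% of total GPU memory.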
model1 = LLM("facebook/opt-125m", gpu_memory_utilization=0.3)
model2 = LLM("facebook/opt-125m", gpu_memory_utilization=0.3)
model3 = LLM("facebook/opt-125m", gpu_memory_utilization=0.3)

print(model1.generate("Hello!"))
"""
Processed prompts: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 13.57it/s, est. speed input: 40.76 toks/s, output: 217.33 toks/s]
[RequestOutput(request_id=0, prompt='Hello!', prompt_token_ids=[2, 31414, 328], prompt_logprobs=None, outputs=[CompletionOutput(index=0, text=' That is my dad. He was a wautdig with me when I was', token_ids=(280, 16, 127, 4252, 4, 91, 21, 10, 885, 4255, 20098, 19, 162, 77, 38, 21), cumulative_logprob=None, logprobs=None, finish_reason=length, stop_reason=None)], finished=True, metrics=RequestMetrics(arrival_time=1722878319.5231462, last_token_time=1722878319.5231462, first_scheduled_time=1722878319.5450768, first_token_time=1722878319.563497, time_in_queue=0.021930694580078125, finished_time=1722878319.618218), lora_request=None)]
"""
cmwilhelm commented 1 month ago

@mgoin I have a concern related to @gnpinkert's. In your example of creating multiple LLM instances of the same model, I don't believe the copies will actually end up with the same memory allocation, because of the code in this function:

https://github.com/vllm-project/vllm/blob/57f560aa23077ed9def5952ab81a65bc080ae234/vllm/worker/worker.py#L161-L206

If I'm reading this correctly, memory is allocated as a function of both the passed gpu_memory_utilization value, as well as some profiling of actual current and expected future utilization of the GPU. This means model1 will end up with the largest KV cache, and model3 will end up with the smallest because by the time we initialize model3 there are already two copies of the model that lower our measurement of free_gpu_memory.
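
To paraphrase the linked code, the block count boils down to roughly the following (a simplified sketch with a made-up helper name, not the exact vLLM implementation):

import torch

def approx_num_gpu_blocks(gpu_memory_utilization: float,
                          cache_block_size_bytes: int) -> int:
    # Simplified paraphrase of the linked determine_num_available_blocks().
    free_gpu_memory, total_gpu_memory = torch.cuda.mem_get_info()
    # peak_memory counts everything already resident on the device, including
    # the weights and KV cache of any LLM instance constructed earlier.
    peak_memory = total_gpu_memory - free_gpu_memory
    return int(
        (total_gpu_memory * gpu_memory_utilization - peak_memory)
        // cache_block_size_bytes
    )

# Each later instance sees a larger peak_memory, so it is granted fewer
# KV cache blocks even with an identical gpu_memory_utilization.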

In my (offline batch inference) context, I am running 7B models and have noticed that vLLM is CPU-bound, resulting in only about 50% GPU core utilization. Historically, pre-vLLM, I would run multiple copies of a given model per GPU using multiprocessing to improve throughput in such cases. With vLLM's GPU memory configuration, this becomes very hard to reason about and introduces potential race conditions.
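
For reference, the pre-vLLM pattern I am describing looks roughly like the sketch below; the model name, memory fractions, and helper are illustrative only.

import multiprocessing as mp

from vllm import LLM, SamplingParams

def run_shard(prompts: list[str], mem_fraction: float) -> list[str]:
    # Each worker process builds its own engine, capped to a share of the GPU.
    llm = LLM("facebook/opt-125m", gpu_memory_utilization=mem_fraction)
    params = SamplingParams(max_tokens=64)
    return [out.outputs[0].text for out in llm.generate(prompts, params)]

if __name__ == "__main__":
    prompts = [f"Prompt {i}" for i in range(100)]
    shards = [prompts[0::2], prompts[1::2]]
    # NOTE: the engines profile free GPU memory independently and concurrently,
    # which is exactly the race condition mentioned above.
    with mp.get_context("spawn").Pool(processes=2) as pool:
        results = pool.starmap(run_shard, [(s, 0.4) for s in shards])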

gnpinkert commented 1 month ago

Would the max KV cache size not be a function of the max batch size and the max sequence length? @mgoin