vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Excessive Memory Consumption of Cudagraph on A10G/L4 GPUs #5517

Closed. ymwangg closed this issue 2 months ago.

ymwangg commented 3 months ago

Your current environment

Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.5
Libc version: glibc-2.35

Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-1020-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A10G
GPU 1: NVIDIA A10G
GPU 2: NVIDIA A10G
GPU 3: NVIDIA A10G
GPU 4: NVIDIA A10G
GPU 5: NVIDIA A10G
GPU 6: NVIDIA A10G
GPU 7: NVIDIA A10G

Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      48 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             192
On-line CPU(s) list:                0-191
Vendor ID:                          AuthenticAMD
Model name:                         AMD EPYC 7R32
CPU family:                         23
Model:                              49
Thread(s) per core:                 2
Core(s) per socket:                 48
Socket(s):                          2
Stepping:                           0
BogoMIPS:                           5600.00
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Hypervisor vendor:                  KVM
Virtualization type:                full
L1d cache:                          3 MiB (96 instances)
L1i cache:                          3 MiB (96 instances)
L2 cache:                           48 MiB (96 instances)
L3 cache:                           384 MiB (24 instances)
NUMA node(s):                       2
NUMA node0 CPU(s):                  0-47,96-143
NUMA node1 CPU(s):                  48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy==1.8.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.5.0
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] transformers==4.41.2
[pip3] triton==2.3.0
[pip3] vllm-nccl-cu12==2.18.1.0.4.0
[conda] _anaconda_depends         2024.02             py311_mkl_1  
[conda] blas                      1.0                         mkl  
[conda] mkl                       2023.1.0         h213fc3f_46344  
[conda] mkl-service               2.4.0           py311h5eee18b_1  
[conda] mkl_fft                   1.3.8           py311h5eee18b_0  
[conda] mkl_random                1.2.4           py311hdb19cb5_0  
[conda] numpy                     1.26.4          py311h08b1b3b_0  
[conda] numpy-base                1.26.4          py311hf175353_0  
[conda] numpydoc                  1.5.0           py311h06a4308_0  
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] torch                     2.3.0                    pypi_0    pypi
[conda] transformers              4.41.2                   pypi_0    pypi
[conda] triton                    2.3.0                    pypi_0    pypi
[conda] vllm-nccl-cu12            2.18.1.0.4.0             pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X  PHB PHB PHB PHB PHB PHB PHB 0-191   0-1     N/A
GPU1    PHB  X  PHB PHB PHB PHB PHB PHB 0-191   0-1     N/A
GPU2    PHB PHB  X  PHB PHB PHB PHB PHB 0-191   0-1     N/A
GPU3    PHB PHB PHB  X  PHB PHB PHB PHB 0-191   0-1     N/A
GPU4    PHB PHB PHB PHB  X  PHB PHB PHB 0-191   0-1     N/A
GPU5    PHB PHB PHB PHB PHB  X  PHB PHB 0-191   0-1     N/A
GPU6    PHB PHB PHB PHB PHB PHB  X  PHB 0-191   0-1     N/A
GPU7    PHB PHB PHB PHB PHB PHB PHB  X  0-191   0-1     N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

I noticed that the memory consumption of CUDA graphs with tensor parallelism on G5/G6 instances (A10G/L4 GPUs) is significantly higher than on P4d instances (A100 GPUs). I'm not sure whether this is expected due to the lack of NVLink support. It would be great if it could be mitigated, since A10G/L4 GPUs have smaller memory capacity.

from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Create an LLM.
llm = LLM(model="casperhansen/llama-3-70b-instruct-awq", tensor_parallel_size=8, distributed_executor_backend="ray", gpu_memory_utilization=0.85)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

Below is the memory consumption of cudagraph on different GPUs.

Instance          Custom all-reduce disabled    CUDA graph memory (GB)
G5.48x (8xA10G)   Yes                           3.6348
P4d (8xA100)      Yes                           1.1875
P4d (8xA100)      No                            0.6172
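
For reference, here is a standalone sketch (plain PyTorch, not the measurement used for the table above) of how one might gauge how much memory a single CUDA graph capture consumes:

# Minimal sketch: capture a CUDA graph around a small matmul and report how much
# free GPU memory the capture consumed, using torch.cuda.mem_get_info.
import torch

def free_gib() -> float:
    free, _total = torch.cuda.mem_get_info()
    return free / 1024**3

if __name__ == "__main__":
    x = torch.randn(1024, 1024, device="cuda")
    w = torch.randn(1024, 1024, device="cuda")

    # Warm up on a side stream first, as recommended before graph capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        y = x @ w
    torch.cuda.current_stream().wait_stream(s)
    torch.cuda.synchronize()

    before = free_gib()
    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):
        y = x @ w
    torch.cuda.synchronize()
    after = free_gib()

    print(f"CUDA graph capture consumed ~{before - after:.4f} GiB")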

cc @youkaichao any suggestions?

Update: This discrepancy is due to the CUDA driver version. Using the driver that ships with CUDA 12.4 or later significantly reduces the CUDA graph memory usage.

youkaichao commented 3 months ago

If disabling custom allreduce increases the CUDA graph memory, then I suppose this is caused by NCCL. It is machine-topology dependent.

youkaichao commented 3 months ago

I have a script to test this. You can give it a try; it tells you how much memory NCCL costs.
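
The script isn't linked here, but a rough sketch of the idea (my own sketch, not the script referenced above; assumes torch.distributed with the NCCL backend and a torchrun launch) could look like this:

# Estimate how much GPU memory the NCCL communicator and its buffers consume by
# comparing free memory before and after the first collective on each rank.
# Launch with: torchrun --nproc_per_node=<num_gpus> nccl_mem_check.py
import os

import torch
import torch.distributed as dist

def free_gib() -> float:
    free, _total = torch.cuda.mem_get_info()
    return free / 1024**3

if __name__ == "__main__":
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    before = free_gib()
    # The first collective lazily creates the NCCL communicator and its buffers.
    x = torch.ones(1024, device="cuda")
    dist.all_reduce(x)
    torch.cuda.synchronize()
    after = free_gib()

    print(f"rank {dist.get_rank()}: NCCL took ~{before - after:.3f} GiB")
    dist.destroy_process_group()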

trevor-m commented 3 months ago

Is the increased memory usage only happening on A10G when CUDA graphs are used? What's the memory usage without CUDA graphs? It would also be helpful to run with the environment variable NCCL_DEBUG=INFO and look at the resulting logs.
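
For completeness, one way to enable that from the repro script itself (repro.py is just a placeholder name; the variable only needs to be set before NCCL is initialized):

# Enable NCCL debug logging so communicator setup and buffer allocations
# appear in the logs; set it before the LLM (and therefore NCCL) is created.
import os

os.environ["NCCL_DEBUG"] = "INFO"  # or launch as: NCCL_DEBUG=INFO python repro.py

from vllm import LLM  # safe to import and construct after the variable is set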

ymwangg commented 2 months ago

@trevor-m Hi Trevor, nice to see you here. This issue turns out to be related to the CUDA driver version.

Update: This discrepancy is due to the CUDA driver version. Using the driver that ships with CUDA 12.4 or later significantly reduces the CUDA graph memory usage.

CUDA graph memory (GB) by CUDA toolkit / driver version:

GPU type   CUDA 12.2.2 (driver 535.104.05)   CUDA 12.3.2 (driver 545.23.08)   CUDA 12.4.1 (driver 550.54.15)
A10G       3.8086                            3.8086                           1.1367
L4         3.7891                            3.7891                           1.1367

youkaichao commented 2 months ago

@ymwangg Do you mean the CUDA runtime version or the CUDA driver version?

ymwangg commented 2 months ago

@youkaichao The CUDA driver version. In the experiment, I reinstalled the CUDA driver but kept the CUDA toolkit unchanged (12.1).

youkaichao commented 2 months ago

IIUC, the CUDA driver version is something like 555.42.02. The "CUDA Version: 12.5" reported alongside it is just the highest CUDA runtime version that driver can support. See the documentation for details.

Can you report the CUDA driver version instead?
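
For reference, a quick sketch that prints both values side by side (assumes nvidia-smi is on the PATH):

# Print the CUDA runtime version PyTorch was built against and the installed
# NVIDIA driver version; the two are independent of each other.
import subprocess

import torch

print("CUDA runtime (PyTorch build):", torch.version.cuda)
driver = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    text=True,
).strip().splitlines()[0]
print("NVIDIA driver:", driver)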

ymwangg commented 2 months ago

Right. I've updated the table.