vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: torch.cuda.OutOfMemoryError: CUDA out of memory when handling inference requests #5147

Open zhaotyer opened 5 months ago

zhaotyer commented 5 months ago

Your current environment

The output of `python collect_env.py`
Collecting environment information...
PyTorch version: 2.2.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.0
Libc version: glibc-2.31

Python version: 3.8.10 (default, Nov 22 2023, 10:22:35)  [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB

Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   52 bits physical, 57 bits virtual
CPU(s):                          56
On-line CPU(s) list:             0-55
Thread(s) per core:              1
Core(s) per socket:              28
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           106
Model name:                      Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz
Stepping:                        6
Frequency boost:                 enabled
CPU MHz:                         800.000
CPU max MHz:                     2601.0000
CPU min MHz:                     800.0000
BogoMIPS:                        5200.00
Virtualization:                  VT-x
L1d cache:                       2.6 MiB
L1i cache:                       1.8 MiB
L2 cache:                        70 MiB
L3 cache:                        84 MiB
NUMA node0 CPU(s):               0-27
NUMA node1 CPU(s):               28-55
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Enhanced IBRS, IBPB
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] nvidia-nccl-cu11==2.19.3
[pip3] onnx==1.15.0
[pip3] paddle2onnx==1.1.0
[pip3] torch==2.2.1+cu118
[pip3] torchaudio==2.2.1+cu118
[pip3] torchtext==0.5.0
[pip3] torchvision==0.17.1+cu118
[pip3] triton==2.2.0
[pip3] tritonclient==2.19.0
[pip3] vllm-nccl-cu11==2.18.1.0.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    NIC0    NIC1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     PXB     PXB     0-27    0               N/A
GPU1    SYS      X      SYS     SYS     28-55   1               N/A
NIC0    PXB     SYS      X      PIX
NIC1    PXB     SYS     PIX      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1

🐛 Describe the bug

The setup is Triton Inference Server + vLLM; the serving code is based on https://github.com/triton-inference-server/vllm_backend/blob/main/src/model.py
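
For context, a minimal sketch of the request path the traceback below goes through, assuming vLLM 0.4.x's AsyncLLMEngine API. The function name `handle_request` and the engine arguments are illustrative only, not the backend's actual code:

```python
# Sketch of the streaming path that raises the OOM (vLLM 0.4.x API).
import uuid

from vllm import SamplingParams
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine

engine = AsyncLLMEngine.from_engine_args(
    AsyncEngineArgs(
        model="Qwen/Qwen1.5-14B-Chat",
        enforce_eager=True,        # matches ENFORCE_EAGER: True below
        tensor_parallel_size=2,    # assumption: both A100s are used via ray
    )
)

async def handle_request(prompt: str) -> str:
    sampling_params = SamplingParams(max_tokens=5000)
    request_id = str(uuid.uuid4())
    # generate() returns an async generator; the exception is raised while
    # iterating it, i.e. inside a normal forward pass of the engine loop,
    # long after the model weights were loaded and profiling finished.
    results_generator = engine.generate(prompt, sampling_params, request_id)
    final_output = None
    async for request_output in results_generator:
        final_output = request_output
    return final_output.outputs[0].text

# Usage (illustrative): asyncio.run(handle_request("Hello"))
```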

Error message:
File "/workspace/Qwen1.5-14B-Chat/./atom/1/model.py", line 1503, in vllm_response_thread
    async for request_output in results_generator:
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 661, in generate
    raise e
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 655, in generate
    async for request_output in stream:
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 77, in __anext__
    raise result
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
    task.result()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 496, in run_engine_loop
    has_requests_in_progress = await asyncio.wait_for(
  File "/usr/lib/python3.8/asyncio/tasks.py", line 494, in wait_for
    return fut.result()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 470, in engine_step
    request_outputs = await self.engine.step_async()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 213, in step_async
    output = await self.model_executor.execute_model_async(
  File "/usr/local/lib/python3.8/dist-packages/vllm/executor/ray_gpu_executor.py", line 418, in execute_model_async
    all_outputs = await self._run_workers_async(
  File "/usr/local/lib/python3.8/dist-packages/vllm/executor/ray_gpu_executor.py", line 408, in _run_workers_async
    all_outputs = await asyncio.gather(*coros)
  File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker_base.py", line 158, in execute_method
    raise e
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker_base.py", line 149, in execute_method
    return executor(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 249, in execute_model
    output = self.model_runner.execute_model(seq_group_metadata_list,
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/model_runner.py", line 848, in execute_model
    hidden_states = model_executable(**execute_model_kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 315, in forward
    hidden_states = self.model(input_ids, positions, kv_caches,
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 252, in forward
    hidden_states, residual = layer(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 215, in forward
    hidden_states = self.mlp(hidden_states)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 74, in forward
    gate_up, _ = self.gate_up_proj(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/layers/linear.py", line 242, in forward
    output_parallel = self.linear_method.apply_weights(self, input_, bias)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/layers/linear.py", line 104, in apply_weights
    return F.linear(x, weight, bias)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 824.00 MiB. GPU 0 has a total capacity of 79.15 GiB of which 770.38 MiB is free. Process 183127 has 478.00 MiB memory in use. Process 183821 has 478.00 MiB memory in use. Process 187106 has 13.02 GiB memory in use. Process 187121 has 1.47 GiB memory in use. Process 999332 has 478.00 MiB memory in use. Process 1001839 has 23.42 GiB memory in use. Process 281733 has 478.00 MiB memory in use. Process 282788 has 38.54 GiB memory in use. Of the allocated memory 37.12 GiB is allocated by PyTorch, and 578.33 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
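
Reading the message closely, GPU 0 is shared with several other processes, and their reported usage already nearly fills the card. A quick tally of the figures quoted above:

```python
# Per-process usage reported in the OOM message (values copied verbatim;
# GiB = 1024 MiB).
usage_gib = [
    478 / 1024,   # process 183127
    478 / 1024,   # process 183821
    13.02,        # process 187106
    1.47,         # process 187121
    478 / 1024,   # process 999332
    23.42,        # process 1001839
    478 / 1024,   # process 281733
    38.54,        # process 282788 (likely the vLLM worker that raised the OOM)
]
print(f"total in use: {sum(usage_gib):.2f} GiB of 79.15 GiB")
# total in use: 78.32 GiB of 79.15 GiB -> only ~0.75 GiB free, less than
# the 824 MiB activation buffer that F.linear tries to allocate.
```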

[screenshot: GPU memory usage after model loading]

vLLM args:
  ENFORCE_EAGER: True

Model info:
  Qwen1.5-14B-Chat

Request info:
  batch: 128
  prompt len: 5000
  output len: 5000

The engine already runs self.model_runner.profile_run() before allocating KV-cache blocks and uses the measured peak memory to size the cache, so why does GPU memory still run out during inference?
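
For reference, a hedged sketch of the knobs that typically reduce this kind of pressure, written as the AsyncEngineArgs fields that the triton vllm_backend reads from its model.json. The concrete values are assumptions for illustration, not tested settings for this deployment:

```python
# Hedged mitigation sketch: engine arguments that trade throughput for
# memory headroom. Values are illustrative assumptions.
import os

# What the PyTorch OOM message suggests for fragmentation; it must be set
# before CUDA is initialized, and must also reach the ray worker processes.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

from vllm.engine.arg_utils import AsyncEngineArgs

engine_args = AsyncEngineArgs(
    model="Qwen/Qwen1.5-14B-Chat",
    enforce_eager=True,
    tensor_parallel_size=2,
    # Leave headroom for the other processes sharing GPU 0; the default
    # 0.90 assumes vLLM has (almost) the whole card to itself.
    gpu_memory_utilization=0.7,
    # Bound sequence length (5000 prompt + 5000 output fits in 10240) and
    # concurrency so both the KV-cache and peak activations stay smaller.
    max_model_len=10240,
    max_num_seqs=32,
)
```

Lowering gpu_memory_utilization matters here because the KV-cache budget computed after profile_run() assumes vLLM gets that fraction of the whole card to itself; memory claimed by other processes on GPU 0 (or lost to fragmentation) after startup is not accounted for, which appears to be the situation in the OOM report above.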

zhaotyer commented 5 months ago

Is anyone investigating this issue?

DarkLight1337 commented 4 months ago

May be fixed by #5355.

github-actions[bot] commented 1 week ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!