vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Llama-3.2-11B-Vision gives OOM error on 96GB H100 #9630

Open khayamgondal opened 1 week ago

khayamgondal commented 1 week ago

Your current environment

The output of `python collect_env.py`:

```
Collecting environment information...
PyTorch version: 2.4.0a0+3bcc3cddb5.nv24.07
Is debug build: False
CUDA used to build PyTorch: 12.5
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35

Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1024-nvidia-64k-aarch64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GH200 480GB
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/aarch64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       aarch64
CPU op-mode(s):                     64-bit
Byte Order:                         Little Endian
CPU(s):                             72
On-line CPU(s) list:                0-71
Vendor ID:                          ARM
Model name:                         Neoverse-V2
Model:                              0
Thread(s) per core:                 1
Core(s) per socket:                 72
Socket(s):                          1
Stepping:                           r0p0
Frequency boost:                    disabled
CPU max MHz:                        3456.0000
CPU min MHz:                        81.0000
BogoMIPS:                           2000.00
Flags:                              fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh bti
L1d cache:                          4.5 MiB (72 instances)
L1i cache:                          4.5 MiB (72 instances)
L2 cache:                           72 MiB (72 instances)
L3 cache:                           114 MiB (1 instance)
NUMA node(s):                       9
NUMA node0 CPU(s):                  0-71
NUMA node1 CPU(s):
NUMA node2 CPU(s):
NUMA node3 CPU(s):
NUMA node4 CPU(s):
NUMA node5 CPU(s):
NUMA node6 CPU(s):
NUMA node7 CPU(s):
NUMA node8 CPU(s):
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; __user pointer sanitization
Vulnerability Spectre v2:           Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] jupyterlab_nvidia_nsight==0.6.0
[pip3] numpy==1.26.4
[pip3] nvidia-cudnn-frontend==1.5.1
[pip3] nvidia-dali-cuda120==1.39.0
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-modelopt==0.13.0
[pip3] nvidia-nvimgcodec-cu12==0.2.0.7
[pip3] nvidia-pyindex==1.0.9
[pip3] onnx==1.16.0
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.0.0+989adb9a2
[pip3] pyzmq==26.0.3
[pip3] torch==2.4.0a0+3bcc3cddb5.nv24.7
[pip3] torch-tensorrt==2.5.0a0
[pip3] torchvision==0.19.0a0
[pip3] transformers==4.45.2
[pip3] triton==3.0.0+git692143cd
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.3.post2.dev23+g440130a9
vLLM Build Flags:
CUDA Archs: 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0    NIC0    NIC1    NIC2    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     SYS     SYS     0-71            0               1
NIC0    SYS      X      SYS     SYS
NIC1    SYS     SYS      X      PIX
NIC2    SYS     SYS     PIX      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
```

Model Input Dumps

No response

🐛 Describe the bug

`vllm serve` fails with a CUDA out-of-memory error during the multi-modal profile run:

```
INFO 10-23 19:55:16 model_runner.py:1067] Loading model weights took 19.8557 GB
INFO 10-23 19:55:16 enc_dec_model_runner.py:301] Starting profile run for multi-modal models.
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 390, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 139, in from_engine_args
    return cls(
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 78, in __init__
    self.engine = LLMEngine(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 348, in __init__
    self._initialize_kv_caches()
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 483, in _initialize_kv_caches
    self.model_executor.determine_num_available_blocks())
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/gpu_executor.py", line 114, in determine_num_available_blocks
    return self.driver_worker.determine_num_available_blocks()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 223, in determine_num_available_blocks
    self.model_runner.profile_run()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/enc_dec_model_runner.py", line 359, in profile_run
    self.execute_model(model_input, kv_caches, intermediate_tensors)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/enc_dec_model_runner.py", line 203, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1552, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mllama.py", line 1254, in forward
    cross_attention_states = self.get_cross_attention_states(
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mllama.py", line 1143, in get_cross_attention_states
    cross_attention_states = self.vision_model(pixel_values,
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1552, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mllama.py", line 598, in forward
    hidden_state = self.global_transformer(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1552, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mllama.py", line 453, in forward
    hidden_states = encoder_layer(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1552, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mllama.py", line 421, in forward
    hidden_state = self.mlp(hidden_state)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1552, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/clip.py", line 279, in forward
    hidden_states = self.activation_fn(hidden_states)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1552, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/activation.py", line 704, in forward
    return F.gelu(input, approximate=self.approximate)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 10.08 GiB. GPU 0 has a total capacity of 94.50 GiB of which 10.06 GiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 80.48 GiB is allocated by PyTorch, and 3.26 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Traceback (most recent call last):
  File "/usr/local/bin/vllm", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/vllm/scripts.py", line 195, in main
    args.dispatch_function(args)
  File "/usr/local/lib/python3.10/dist-packages/vllm/scripts.py", line 41, in serve
    uvloop.run(run_server(args))
  File "/usr/local/lib/python3.10/dist-packages/uvloop/__init__.py", line 82, in run
    return loop.run_until_complete(wrapper())
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.10/dist-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 552, in run_server
    async with build_async_engine_client(args) as engine_client:
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 107, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 194, in build_async_engine_client_from_engine_args
    raise RuntimeError(
RuntimeError: Engine process failed to start
```

I am running the server like this: `vllm serve /models/LLM/Llama-3.2-11B-Vision --disable-log-requests --gpu-memory-utilization 0.2`. I also tried different values for `--gpu-memory-utilization`.

vLLM version: 0.6.3
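
The OOM message itself suggests setting `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` to avoid fragmentation. A minimal sketch of combining that with the same serve command as above (just a sketch; I have not verified that it helps here):

```bash
# Sketch only: apply the allocator setting suggested in the OOM message
# to the same serve command shown above.
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \
  vllm serve /models/LLM/Llama-3.2-11B-Vision \
    --disable-log-requests \
    --gpu-memory-utilization 0.2
```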


DarkLight1337 commented 1 week ago

Just a sanity check: are you running other processes that use that GPU at the same time as vLLM? You can try to reduce memory usage by setting `--max-model-len` and/or `--max-num-seqs`. On the other hand, `--gpu-memory-utilization` should be set to a higher value to let vLLM use more of the GPU's memory.
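
For example, something along these lines (a sketch only; the values are illustrative, not tuned for this model):

```bash
# Illustrative only: raise the memory budget vLLM may use, and shrink the
# profiling footprint via a shorter context and fewer concurrent sequences.
vllm serve /models/LLM/Llama-3.2-11B-Vision \
  --disable-log-requests \
  --gpu-memory-utilization 0.9 \
  --max-model-len 4096 \
  --max-num-seqs 16
```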

khayamgondal commented 1 week ago

Thanks, I am not running other processes.

I am still seeing OOM even when specifying `--max-model-len 32`: `vllm serve /models/LLM/Llama-3.2-11B-Vision --disable-log-requests --gpu-memory-utilization 0.9 --max-model-len 32`

The OOM errors are thrown after `INFO 10-24 18:10:17 enc_dec_model_runner.py:301] Starting profile run for multi-modal models.`

With `--max-num-seqs` I see a different error (below):

```
vllm serve /models/LLM/Llama-3.2-11B-Vision --disable-log-requests --gpu-memory-utilization 0.9 --max-model-len 512 --max-num-seqs 128
```

```
INFO 10-24 18:12:26 model_runner.py:1067] Loading model weights took 19.8557 GB
INFO 10-24 18:12:26 enc_dec_model_runner.py:301] Starting profile run for multi-modal models.
INFO 10-24 18:12:39 gpu_executor.py:122] # GPU blocks: 2252, # CPU blocks: 1638
INFO 10-24 18:12:39 gpu_executor.py:126] Maximum concurrency for 512 tokens per request: 70.38x
INFO 10-24 18:12:41 model_runner.py:1395] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 10-24 18:12:41 model_runner.py:1399] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1795, in capture
    output_hidden_or_intermediate_states = self.model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1552, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1561, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/mllama.py", line 1233, in forward
    skip_cross_attention = max(attn_metadata.encoder_seq_lens) == 0
RuntimeError: CUDA error: operation not permitted when stream is capturing
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 390, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 139, in from_engine_args
    return cls(
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 78, in __init__
    self.engine = LLMEngine(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 348, in __init__
    self._initialize_kv_caches()
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 496, in _initialize_kv_caches
    self.model_executor.initialize_cache(num_gpu_blocks, num_cpu_blocks)
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/gpu_executor.py", line 129, in initialize_cache
    self.driver_worker.initialize_cache(num_gpu_blocks, num_cpu_blocks)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 271, in initialize_cache
    self._warm_up_model()
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 287, in _warm_up_model
    self.model_runner.capture_model(self.gpu_cache)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1515, in capture_model
    graph_runner.capture(**capture_inputs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1794, in capture
    with torch.cuda.graph(self._graph, pool=memory_pool, stream=stream):
  File "/usr/local/lib/python3.10/dist-packages/torch/cuda/graphs.py", line 184, in __exit__
    self.cuda_graph.capture_end()
  File "/usr/local/lib/python3.10/dist-packages/torch/cuda/graphs.py", line 82, in capture_end
    super().capture_end()
RuntimeError: CUDA error: operation failed due to a previous error during capture
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Traceback (most recent call last):
  File "/usr/local/bin/vllm", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/vllm/scripts.py", line 195, in main
    args.dispatch_function(args)
  File "/usr/local/lib/python3.10/dist-packages/vllm/scripts.py", line 41, in serve
    uvloop.run(run_server(args))
  File "/usr/local/lib/python3.10/dist-packages/uvloop/__init__.py", line 82, in run
    return loop.run_until_complete(wrapper())
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.10/dist-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 552, in run_server
    async with build_async_engine_client(args) as engine_client:
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 107, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 194, in build_async_engine_client_from_engine_args
    raise RuntimeError(
RuntimeError: Engine process failed to start
```
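
Per the log's own hint about eager mode, one option would be to skip CUDA graph capture entirely; a sketch of that (same flags as above, plus `--enforce-eager`, values still illustrative):

```bash
# Sketch: disable CUDA graph capture, as suggested in the
# "Capturing the model for CUDA graphs" log message above.
vllm serve /models/LLM/Llama-3.2-11B-Vision \
  --disable-log-requests \
  --gpu-memory-utilization 0.9 \
  --max-model-len 512 \
  --max-num-seqs 128 \
  --enforce-eager
```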

Even when specifying `--cpu-offload-gb` I still see the OOM.

It looks like the OOM happens right after `Starting profile run for multi-modal models.`

To my surprise, I can load NVLM-D-72B using `vllm serve /models/LLM/NVLM-D-72B/ --disable-log-requests --gpu-memory-utilization 0.7 --cpu-offload-gb 100 --max-model-len 512`.

DarkLight1337 commented 1 week ago

Can you try the settings shown in the example script?
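
For reference, a rough offline sketch in the spirit of that example (the model path, prompt format, and parameter values here are assumptions; the example script in the vLLM repo is the authoritative source):

```python
# Rough sketch only; mirrors the kind of settings used in vLLM's
# vision-language examples, not an exact copy of the shipped script.
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(
    model="/models/LLM/Llama-3.2-11B-Vision",  # local path from this issue
    max_model_len=4096,   # bound the context length and KV-cache footprint
    max_num_seqs=16,      # fewer concurrent sequences in the profile run
    enforce_eager=True,   # avoid CUDA graph capture
)

image = Image.open("example.jpg")  # placeholder image
prompt = "<|image|><|begin_of_text|>Describe this image."

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```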