To split the model across GPUs, you should set the tensor_parallel_size argument to the number of GPUs.
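For example, a minimal sketch of that setting (the 2-GPU value just mirrors the 2× A100 setup discussed below; adjust it to your machine):

from vllm import LLM

# Shard the model weights across 2 visible GPUs via tensor parallelism.
llm = LLM(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    tensor_parallel_size=2,
)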
@DarkLight1337
Of course, I tried it with a 2× A100 setup.
If the model is 11B, shouldn't it only need approximately 22 GB of memory?
You should also consider the memory required for inference, not just the model weights. If you run into OOM issues, you may need to reduce max_model_len and/or max_num_seqs as shown in the example script.
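For instance, something along these lines (illustrative values only, not a verified configuration; tune them to your hardware):

from vllm import LLM

# A smaller context window and fewer concurrent sequences shrink the KV cache
# that vLLM has to reserve at startup.
llm = LLM(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    tensor_parallel_size=2,
    max_model_len=4096,
    max_num_seqs=64,
)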
@DarkLight1337
llm = LLM(
model="meta-llama/Llama-3.2-11B-Vision-Instruct",
tensor_parallel_size=2,
max_model_len=4096,
gpu_memory_utilization=0.8,
trust_remote_code=True, # !
)
When I run the above code, an error occurs at lines 338 to 339 of llm_engine.py:
if not self.model_config.embedding_mode:
    self._initialize_kv_caches()
How did you install vLLM? I see this in the output of collect_env.py:
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: N/A
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 0-15,32-47 0 N/A
GPU1 NV12 X 0-15,32-47 0 N/A
I don't know why the vLLM version is not included. I'm using vllm==0.6.2.
What is the command you used to install vLLM?
I created a conda virtual environment and installed vLLM using the pip install vllm command.
@youkaichao @dtrifiro There seems to be something wrong with collect_env.py, can you look into this? I suspect it has something to do with the recent change to using setuptools-scm.
When I run the above code, an error occurs at lines 338 to 339 of llm_engine.py:
if not self.model_config.embedding_mode:
    self._initialize_kv_caches()
Can you show more about the error?
I'm continuing to debug now. Other than the error message that appeared when I first wrote the issue, nothing else shows up.
I'm still debugging, so I don't know exactly where the error occurred.
@DarkLight1337
File "/home/heerak/miniconda3/envs/eval/lib/python3.10/site-packages/vllm/model_executor/models/mllama.py", line 1084, in forward
    cross_attention_states = self.vision_model(pixel_values,
At this point, OOM occurs because memory keeps increasing during the forward pass.
@hrson-1203 You should only need to set max_num_seqs=16 and enforce_eager=True in order to launch the model.
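Roughly like this, building on the snippet earlier in the thread (a sketch, not an exact config):

from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    tensor_parallel_size=2,
    max_model_len=4096,
    max_num_seqs=16,      # limit the number of concurrent sequences
    enforce_eager=True,   # skip CUDA graph capture
)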
Oh, it finally works.
Could you also explain why we need to use max_num_seqs=16 and enforce_eager=True?
Mostly because of its architecture.
- max_num_seqs=16: this model has a context length of 128k+ plus additional block tables for the cross-attention layers, so the default setting max_num_seqs=256 won't work.
- enforce_eager=True: by default we turn on CUDA graphs for decoder-only language models, but the cross-attention layers in this model are only needed at inference time if there's an image. This dynamic nature is incompatible with the current CUDA graph implementation, and supporting it is WIP.
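As a rough back-of-the-envelope illustration of the scale involved (the 16-token block size and the exact context length are assumptions for this sketch, not measured values):

# Why the default max_num_seqs=256 is too aggressive for a ~128k-context model:
max_model_len = 128 * 1024          # assumed ~128k-token context
block_size = 16                     # assumed KV-cache block size
blocks_per_seq = max_model_len // block_size
print(blocks_per_seq)               # 8192 blocks per sequence
print(blocks_per_seq * 256)         # ~2.1M block-table entries with max_num_seqs=256
print(blocks_per_seq * 16)          # ~131k entries with max_num_seqs=16
# The cross-attention layers need their own block tables on top of this.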
Thank you so much for explaining. Thanks to you, I was able to test it.
What parts of each model should I look at to figure out whether such a setting is necessary?
For most models, you don't need to worry about enforcing eager mode unless you need additional VRAM (CUDA graphs themselves also consume some memory), and if you run into OOM issues, always try lowering max_num_seqs.
I'm trying to run meta-llama/Llama-3.2-11B-Vision-Instruct using vLLM Docker.
GPU server specifications:
- GPU Count: 4
- GPU Type: A100 - 80GB
vLLM Docker run command:
docker run --gpus all \
-v /data/hf_cache/ \
--env "HUGGING_FACE_HUB_TOKEN=<token>" \
-p 8000:8000 \
--ipc=host \
vllm/vllm-openai:latest \
--model meta-llama/Llama-3.2-11B-Vision-Instruct \
--tensor-parallel-size 4 \
--max-model-len 4096 \
--download_dir /data/vllm_cache \
--enforce-eager
Facing a similar issue. I have raised a new issue: [Usage]: DOCKER - Getting OOM while running meta-llama/Llama-3.2-11B-Vision-Instruct
As mentioned above, you should limit --max-num-seqs to a smaller value, e.g. 16.
@DarkLight1337 fix is included in https://github.com/vllm-project/vllm/pull/8900
Your current environment
Collecting environment information... PyTorch version: 2.4.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64) GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-5.15.0-117-generic-x86_64-with-glibc2.31 Is CUDA available: True CUDA runtime version: 12.1.66 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe GPU 1: NVIDIA A100 80GB PCIe
Nvidia driver version: 535.171.04 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True
CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 46 bits physical, 57 bits virtual CPU(s): 64 On-line CPU(s) list: 0-63 Thread(s) per core: 2 Core(s) per socket: 16 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 106 Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz Stepping: 6 CPU MHz: 806.789 CPU max MHz: 3500.0000 CPU min MHz: 800.0000 BogoMIPS: 5800.00 Virtualization: VT-x L1d cache: 1.5 MiB L1i cache: 1 MiB L2 cache: 40 MiB L3 cache: 48 MiB NUMA node0 CPU(s): 0-15,32-47 NUMA node1 CPU(s): 16-31,48-63 Vulnerability Gather data sampling: Mitigation; Microcode Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4 [pip3] nvidia-cublas-cu12==12.1.3.1 [pip3] nvidia-cuda-cupti-cu12==12.1.105 [pip3] nvidia-cuda-nvrtc-cu12==12.1.105 [pip3] nvidia-cuda-runtime-cu12==12.1.105 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.0.2.54 [pip3] nvidia-curand-cu12==10.3.2.106 [pip3] nvidia-cusolver-cu12==11.4.5.107 [pip3] nvidia-cusparse-cu12==12.1.0.106 [pip3] nvidia-ml-py==12.535.161 [pip3] nvidia-nccl-cu12==2.20.5 [pip3] nvidia-nvjitlink-cu12==12.6.68 [pip3] nvidia-nvtx-cu12==12.1.105 [pip3] pyzmq==25.1.2 [pip3] torch==2.4.0 [pip3] torchvision==0.19.0 [pip3] transformers==4.45.0 [pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi [conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi [conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi [conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi [conda] nvidia-ml-py 12.535.161 pypi_0 pypi [conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi [conda] nvidia-nvjitlink-cu12 12.6.68 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi [conda] pyzmq 26.2.0 pypi_0 pypi [conda] torch 2.4.0 pypi_0 pypi [conda] torchvision 0.19.0 pypi_0 pypi [conda] transformers 4.45.0 pypi_0 pypi [conda] triton 3.0.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: N/A
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 0-15,32-47 0 N/A
GPU1 NV12 X 0-15,32-47 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
How would you like to use vllm
I want to run inference with meta-llama/Llama-3.2-11B-Vision-Instruct.
I tried to load the multi-modal model into vllm and proceed with inference. However, even with two A100s, an OOM error occurred while loading the 11B model.
The error message below is from a run where only one A100 was used; even when both are used, the same OOM occurs.
How can I load the Llama-3.2-11B-Vision-Instruct model with vllm?