vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: When starting deepseek-coder-v2-lite-instruct with vllm on 4 GPUs, one of them is at 0%. #6156

Open fengyang95 opened 3 weeks ago

fengyang95 commented 3 weeks ago

Your current environment

Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.31

Python version: 3.9.2 (default, Feb 28 2021, 17:03:44)  [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA L40
GPU 1: NVIDIA L40
GPU 2: NVIDIA L40
GPU 3: NVIDIA L40

Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   52 bits physical, 57 bits virtual
CPU(s):                          180
On-line CPU(s) list:             0-179
Thread(s) per core:              2
Core(s) per socket:              45
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           143
Model name:                      Intel(R) Xeon(R) Platinum 8457C
Stepping:                        8
CPU MHz:                         2599.520
BogoMIPS:                        5199.04
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       4.2 MiB
L1i cache:                       2.8 MiB
L2 cache:                        180 MiB
L3 cache:                        195 MiB
NUMA node0 CPU(s):               0-89
NUMA node1 CPU(s):               90-179
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Mitigation; TSX disabled
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear arch_capabilities

Versions of relevant libraries:
[pip3] byted-torch==2.1.0.post2
[pip3] numpy==1.26.2
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] torchaudio==2.1.0+cu121
[pip3] torchvision==0.18.0
[pip3] transformers==4.42.3
[pip3] triton==2.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NODE    NODE    SYS     SYS     1,4-89  0               N/A
GPU1    NODE     X      NODE    SYS     SYS     1,4-89  0               N/A
GPU2    NODE    NODE     X      SYS     SYS     1,4-89  0               N/A
GPU3    SYS     SYS     SYS      X      SYS     91,94-179       1               N/A
NIC0    SYS     SYS     SYS     SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
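
For reference, the GPU topology matrix above appears to be the output of nvidia-smi topo -m (which vLLM's collect_env script runs). Note that GPU3 reaches the other three GPUs only over SYS links, i.e. across the SMP interconnect between NUMA nodes:

nvidia-smi topo -m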

🐛 Describe the bug

When starting deepseek-coder-v2-lite-instruct with vLLM on 4 GPUs (tensor_parallel_size=4), one of them stays at 0% utilization. There is no issue when tensor_parallel_size=1.

[Screenshot, 2024-07-05 11:53 PM]
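
For context, the original launch command was not posted; a minimal invocation reproducing this setup might look like the following, where the model path is an assumption:

python3 -m vllm.entrypoints.openai.api_server \
    --model deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct \
    --trust-remote-code \
    --tensor-parallel-size 4
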
llmpros commented 3 weeks ago

Could you please try the latest vLLM 0.5.1?
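
For reference, a typical upgrade, assuming vLLM was installed via pip:

pip install -U vllm==0.5.1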

maxcccc commented 3 weeks ago

BTW, why did you want to run deepseek-coder-v2-lite-instruct with 4 x L40? I just plan to deploy it on our server with 2 x P40, so I'd like to know your reason.

fengyang95 commented 3 weeks ago

BTW, why did you want to run deepseek-coder-v2-lite-instruct with 4 x L40? I just plan to deploy it on our server with 2 x P40, so I'd like to know your reason.

One L40 is enough; I'm just testing with four cards.

fengyang95 commented 3 weeks ago

Could you please try the latest vLLM 0.5.1?

Nice! I will try it.

garyyang85 commented 2 weeks ago

Hi @fengyang95, I found that deepseek-coder-v2-lite-instruct can be started on 2 x L40 GPUs, but the context cannot reach 128K: only 9415 tokens in my test. Did you encounter the same issue? Below is my start command.

python3 -m vllm.entrypoints.openai.api_server --dtype float16 --trust-remote-code --model DeepSeek-Coder-V2-Lite-Instruct --port 9000 --host 0.0.0.0    --tensor-parallel-size 2 --max-seq-len 63040 --max-model-len 30720

When I remove --max-seq-len 63040 --max-model-len 30720, it reports an error at startup:

[rank0]: ValueError: The model's max seq len (163840) is larger than the maximum number of tokens that can be stored in KV cache (63040). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.
/usr/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 2 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
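
For context, this ValueError is vLLM's startup check: after loading the weights, the engine converts the remaining GPU memory into a KV-cache token budget (63040 tokens here) and refuses to start if max_model_len, which defaults to the model's 163840-token context, exceeds it. A sketch of an adjusted launch along the lines the error suggests; the 0.95 utilization value is an assumption (the default is 0.9):

python3 -m vllm.entrypoints.openai.api_server \
    --model DeepSeek-Coder-V2-Lite-Instruct \
    --dtype float16 \
    --trust-remote-code \
    --tensor-parallel-size 2 \
    --gpu-memory-utilization 0.95 \
    --max-model-len 30720
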
fengyang95 commented 2 weeks ago

Hi @fengyang95, I found that deepseek-coder-v2-lite-instruct can be started on 2 x L40 GPUs, but the context cannot reach 128K: only 9415 tokens in my test. Did you encounter the same issue? Below is my start command.

python3 -m vllm.entrypoints.openai.api_server --dtype float16 --trust-remote-code --model DeepSeek-Coder-V2-Lite-Instruct --port 9000 --host 0.0.0.0    --tensor-parallel-size 2 --max-seq-len 63040 --max-model-len 30720

When I remove --max-seq-len 63040 --max-model-len 30720, it reports an error at startup:

[rank0]: ValueError: The model's max seq len (163840) is larger than the maximum number of tokens that can be stored in KV cache (63040). Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.
/usr/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 2 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

Yes, you need to reduce max_model_len.
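
For reference, the same limits can also be set when constructing the offline LLM class; a minimal sketch, assuming the Hugging Face model id:

from vllm import LLM

# Cap the context so it fits within the measured KV-cache token budget;
# alternatively, raise gpu_memory_utilization to grow that budget.
llm = LLM(
    model="deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
    trust_remote_code=True,
    tensor_parallel_size=2,
    dtype="float16",
    max_model_len=30720,
    gpu_memory_utilization=0.95,
)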