vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: GH200 MGX platform serving is broken after the cupy dependency addition #3744

Open arvindsun opened 6 months ago

arvindsun commented 6 months ago

Your current environment

Collecting environment information...
PyTorch version: 2.2.0a0+81ea7a4
Is debug build: False
CUDA used to build PyTorch: 12.3
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.28.1
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-1012-nvidia-64k-aarch64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: GH200 480GB
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       aarch64
CPU op-mode(s):                     64-bit
Byte Order:                         Little Endian
CPU(s):                             72
On-line CPU(s) list:                0-71
Vendor ID:                          ARM
Model:                              0
Thread(s) per core:                 1
Core(s) per socket:                 72
Socket(s):                          1
Stepping:                           r0p0
Frequency boost:                    disabled
CPU max MHz:                        3465.0000
CPU min MHz:                        81.0000
BogoMIPS:                           2000.00
Flags:                              fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh bti
L1d cache:                          4.5 MiB (72 instances)
L1i cache:                          4.5 MiB (72 instances)
L2 cache:                           72 MiB (72 instances)
L3 cache:                           114 MiB (1 instance)
NUMA node(s):                       9
NUMA node0 CPU(s):                  0-71
NUMA node1 CPU(s):
NUMA node2 CPU(s):
NUMA node3 CPU(s):
NUMA node4 CPU(s):
NUMA node5 CPU(s):
NUMA node6 CPU(s):
NUMA node7 CPU(s):
NUMA node8 CPU(s):
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; __user pointer sanitization
Vulnerability Spectre v2:           Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] onnx==1.15.0rc2
[pip3] optree==0.10.0
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.2.0a0+81ea7a4
[pip3] torch-tensorrt==2.2.0a0
[pip3] torchdata==0.7.0a0
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.17.0a0
[pip3] triton==2.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.3.0
vLLM Build Flags:
CUDA Archs: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    NIC0    NIC1    NIC2    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     SYS     SYS     0-71    0               1
NIC0    SYS      X      PIX     SYS
NIC1    SYS     PIX      X      SYS
NIC2    SYS     SYS     SYS      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_bond_0

🐛 Describe the bug

Starting with https://github.com/vllm-project/vllm/commit/a463c333dd7905519141abe4f61b63ccc6b739a9, multi-node serving with Ray on the MGX platform (NVIDIA GH200) is broken. The basic issue is that cupy's NCCL bindings are not available on ARM, and this causes a cascade of failures. I tried some patches to disable NCCL, but worker initialization hangs whenever more than one node is involved.
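
For reference, the missing ARM support can be demonstrated by importing cupy's NCCL bindings directly (a minimal probe; the exact failure may vary by cupy version):

    # On aarch64 cupy wheels the NCCL bindings are not shipped, so this
    # import raises ImportError (see https://github.com/cupy/cupy/issues/8254).
    from cupy.cuda import nccl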

I can follow up with more details. I have also filed an issue against cupy: https://github.com/cupy/cupy/issues/8254

If it is easy to disable cupy on ARM, that could also be a stopgap while this is being fixed.
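
A minimal sketch of such a guard, assuming a hypothetical helper name (this is not vLLM's actual internal API):

    import platform

    def cupy_nccl_available() -> bool:
        # Hypothetical helper: best-effort probe for cupy's NCCL bindings.
        if platform.machine() == "aarch64":
            # cupy wheels for ARM do not bundle NCCL (cupy/cupy#8254).
            return False
        try:
            from cupy.cuda import nccl  # noqa: F401
        except ImportError:
            return False
        return True

A caller could then fall back to a non-cupy communication path whenever this returns False.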

youkaichao commented 6 months ago

Hi, a few things to note:

  1. We don't support PyTorch 2.2.0 yet, because it depends on NCCL 2.19.3, which has a bug pending a fix (https://github.com/NVIDIA/nccl/issues/1234); see the version check sketched after this list.
  2. We recently removed the cupy dependency (https://github.com/vllm-project/vllm/pull/3625) due to many bug reports about cupy. Can you try building from source on the latest main branch to see if it helps?
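
To check which NCCL version a given PyTorch build links against (relevant to point 1), a minimal probe along these lines should work; torch.cuda.nccl.version() is part of PyTorch, though its return type has varied across releases:

    import torch

    # Report the NCCL version this PyTorch build is linked against.
    # Recent releases return a tuple such as (2, 19, 3).
    print(torch.cuda.nccl.version())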