vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: When using vllm in a Ray actor, the error "No CUDA GPUs are available" occurs. #6896

Closed wyooyw closed 2 months ago

wyooyw commented 2 months ago

Your current environment

PyTorch version: 2.3.0a0+40ec155e58.nv24.03
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.35

Python version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-4.19.91-014-kangaroo.2.10.13.5c249cdaf.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA H800
GPU 1: NVIDIA H800

Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.0.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   52 bits physical, 57 bits virtual
Byte Order:                      Little Endian
CPU(s):                          8
On-line CPU(s) list:             0-7
Vendor ID:                       GenuineIntel
Model name:                      Intel(R) Xeon(R) Processor
CPU family:                      6
Model:                           143
Thread(s) per core:              1
Core(s) per socket:              8
Socket(s):                       1
Stepping:                        8
BogoMIPS:                        5200.00
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 wbnoinvd avx512vbmi umip pku waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear arch_capabilities
Virtualization:                  VT-x
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       384 KiB (8 instances)
L1i cache:                       256 KiB (8 instances)
L2 cache:                        16 MiB (8 instances)
L3 cache:                        97.5 MiB (1 instance)
NUMA node(s):                    1
NUMA node0 CPU(s):               0-7
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2:        Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] onnx==1.15.0rc2
[pip3] optree==0.10.0
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==2.2.0+e28a256d7
[pip3] torch==2.3.0a0+40ec155e58.nv24.3
[pip3] torch-tensorrt==2.3.0a0
[pip3] torchdata==0.7.1a0
[pip3] torchtext==0.17.0a0
[pip3] torchtyping==0.1.4
[pip3] torchvision==0.18.0a0
[pip3] transformers==4.43.2
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.2
vLLM Build Flags:
CUDA Archs: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    NIC0    NIC1    CPU Affinity    NUMA Affinity
GPU0     X      NV8     PHB     PHB     0-7             N/A
GPU1    NV8      X      PHB     PHB     0-7             N/A
NIC0    PHB     PHB      X      PHB
NIC1    PHB     PHB     PHB      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1

🐛 Describe the bug

We used vLLM to run the Qwen2 model with TP=4 inside a Ray actor, but one of the four worker processes reported the error "No CUDA GPUs are available".

When running vLLM directly, without the Ray actor wrapper, it works normally. A minimal sketch of the setup follows the traceback below.

[rank0]: ray.exceptions.ActorDiedError: The actor died because of an error raised in its creation task, ray::LLMRayActor.__init__() (pid=410475, ip=10.207.66.36, actor_id=90f87b7c95de4bb49deafe5c01000000, repr=<rlhf.vllm_generation.vllm_engine.LLMRayActor object at 0x7f8201a56080>)
[rank0]:   File "/cpfs/2926428ee2463e44/user/user1/rlhf/vllm_generation/vllm_engine.py", line 58, in __init__
[rank0]:     self.llm = vllm.LLM(*args, load_format=LoadFormat.AUTO, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/llm.py", line 156, in __init__
[rank0]:     self.llm_engine = LLMEngine.from_engine_args(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 440, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 250, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/distributed_gpu_executor.py", line 25, in __init__
[rank0]:     super().__init__(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 47, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/ray_gpu_executor.py", line 61, in _init_executor
[rank0]:     self._init_workers_ray(placement_group)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/ray_gpu_executor.py", line 233, in _init_workers_ray
[rank0]:     self._run_workers("init_device")
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/ray_gpu_executor.py", line 350, in _run_workers
[rank0]:     self.driver_worker.execute_method(method, *driver_args,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 383, in execute_method
[rank0]:     raise e
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 374, in execute_method
[rank0]:     return executor(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 124, in init_device
[rank0]:     torch.cuda.set_device(self.device)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/cuda/__init__.py", line 424, in set_device
[rank0]:     torch._C._cuda_setDevice(device)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/torch/cuda/__init__.py", line 318, in _lazy_init
[rank0]:     torch._C._cuda_init()
[rank0]: RuntimeError: No CUDA GPUs are available
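For reference, here is a minimal sketch of the setup described above (the actor, module, and model names are illustrative stand-ins, not our actual code):

    import ray
    import vllm


    # Illustrative stand-in for our LLMRayActor: the vLLM engine is built
    # inside the actor's constructor; with tensor_parallel_size > 1, vLLM's
    # Ray executor then spawns additional GPU workers of its own.
    @ray.remote
    class LLMRayActor:
        def __init__(self, *args, **kwargs):
            self.llm = vllm.LLM(*args, **kwargs)

        def generate(self, prompts, sampling_params=None):
            return self.llm.generate(prompts, sampling_params)


    ray.init()
    actor = LLMRayActor.remote("Qwen/Qwen2-7B-Instruct", tensor_parallel_size=4)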
tjohnson31415 commented 2 months ago

Hello @wyooyw. I notice that your environment information shows that you have 2 GPUs:

GPU models and configuration: 
GPU 0: NVIDIA H800
GPU 1: NVIDIA H800

Nvidia driver version: 525.105.17

Using tensor parallelism with TP=4 requires 4 GPUs, so the "No CUDA GPUs are available" error seems accurate.

When it worked without Ray, do you also mean without TP=4?

Jack47 commented 2 months ago

Sorry, that was a typo: we have two GPUs and use TP=2. Some more details: we have two test cases; one passes and the other fails with "No CUDA GPUs are available" as described above.

Passed:

    self.llm = vllm.LLM(*args, **kwargs)

Failed:

    from rlhf.vllm_generation.vlm.model_loader import MyMegatronLoader  # our customized model loader
    self.llm = vllm.LLM(*args, load_format=MyMegatronLoader, **kwargs)
tjohnson31415 commented 2 months ago

Ah, that makes more sense.

The second test fails even when it is the only test run in the Python session, right? If it only fails when running after the first test, then it may just be a problem with garbage collection not fully releasing the GPUs.

If the second test fails on its own while the first one passes, then the issue most likely comes from your customized model loader.
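If it does turn out to be the former, a quick check (just a sketch, assuming both tests run in the same Python session and `llm` is the engine from the first test) is to tear the first engine down explicitly before constructing the second:

    import gc

    import torch

    # The import path below matches recent vLLM releases; it may differ on
    # other versions.
    from vllm.distributed.parallel_state import destroy_model_parallel

    destroy_model_parallel()   # tear down vLLM's tensor-parallel process groups
    del llm                    # drop the reference to the first vllm.LLM instance
    gc.collect()
    torch.cuda.empty_cache()   # release cached GPU memory held by this process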

denadai2 commented 2 months ago

Related to #7013?

youkaichao commented 2 months ago

The latest main branch has already fixed this by upgrading to PyTorch 2.4.