vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai

[Bug]: CUDA error when using multiple GPUs #6147

Open ndao600 opened 3 weeks ago

ndao600 commented 3 weeks ago

Your current environment

(vllm) nd600@PC-7C610BFD7B:~$ python collect_env.py
Collecting environment information...
/home/nd600/miniconda3/envs/vllm/lib/python3.10/site-packages/torch/cuda/__init__.py:118: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 2: out of memory (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() > 0
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35

Python version: 3.10.14 (main, May  6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA RTX 4000 Ada Generation
GPU 1: NVIDIA RTX 4000 Ada Generation
GPU 2: NVIDIA RTX 4000 Ada Generation
GPU 3: NVIDIA GeForce RTX 4080 SUPER

Nvidia driver version: 555.85
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      48 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             48
On-line CPU(s) list:                0-47
Vendor ID:                          AuthenticAMD
Model name:                         AMD Ryzen Threadripper 7960X 24-Cores
CPU family:                         25
Model:                              24
Thread(s) per core:                 2
Core(s) per socket:                 24
Socket(s):                          1
Stepping:                           1
BogoMIPS:                           8387.48
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Virtualization:                     AMD-V
Hypervisor vendor:                  Microsoft
Virtualization type:                full
L1d cache:                          768 KiB (24 instances)
L1i cache:                          768 KiB (24 instances)
L2 cache:                           24 MiB (24 instances)
L3 cache:                           32 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] torchaudio==2.3.1
[pip3] torchvision==0.18.0
[pip3] transformers==4.42.3
[pip3] triton==2.3.0
[conda] blas                      1.0                         mkl
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] libjpeg-turbo             2.0.0                h9bf148f_0    pytorch
[conda] mkl                       2023.1.0         h213fc3f_46344
[conda] mkl-service               2.4.0           py310h5eee18b_1
[conda] mkl_fft                   1.3.8           py310h5eee18b_0
[conda] mkl_random                1.2.4           py310hdb19cb5_0
[conda] numpy                     1.26.4          py310h5f9d8c6_0
[conda] numpy-base                1.26.4          py310hb5e798b_0
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] pytorch-cuda              12.1                 ha16c6d3_5    pytorch
[conda] pytorch-mutex             1.0                        cuda    pytorch
[conda] torch                     2.3.0                    pypi_0    pypi
[conda] torchaudio                2.3.1               py310_cu121    pytorch
[conda] torchvision               0.18.0                   pypi_0    pypi
[conda] transformers              4.42.3                   pypi_0    pypi
[conda] triton                    2.3.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     SYS     SYS                             N/A
GPU1    SYS      X      SYS     SYS                             N/A
GPU2    SYS     SYS      X      SYS                             N/A
GPU3    SYS     SYS     SYS      X                              N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

When I use all the GPUs, I always get the message below. When I run on an individual GPU, it works fine. Additionally, whenever I run with any combination of CUDA_VISIBLE_DEVICES that includes device 1, I get the message below or something similar. I hope you can help. FYI, I have 4 GPUs as you can see above: three are RTX 4000 Ada and one is an RTX 4080 SUPER; I'm not sure whether that mix is part of the problem.
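A minimal isolation sketch (assuming the same conda env; the device combinations listed are just illustrative) is to initialize CUDA separately for each device and for pairs that include device 1, to pin down exactly which combinations fail:

# Try CUDA init per device / per pair; any failing combination should
# reproduce the same "Error 2: out of memory" from cudaGetDeviceCount().
for dev in 0 1 2 3 0,1 1,2 1,3 2,3; do
  echo "CUDA_VISIBLE_DEVICES=$dev"
  CUDA_VISIBLE_DEVICES=$dev python -c "import torch; torch.cuda.init(); print([torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())])"
done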

Additional info: nvidia-smi output
Thu Jul  4 22:54:02 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.03              Driver Version: 555.85         CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX 4000 Ada Gene...    On  |   00000000:01:00.0 Off |                  Off |
| 34%   38C    P8              6W /  130W |       0MiB /  20475MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA RTX 4000 Ada Gene...    On  |   00000000:41:00.0 Off |                  Off |
| 34%   37C    P8              6W /  130W |      15MiB /  20475MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA RTX 4000 Ada Gene...    On  |   00000000:81:00.0  On |                  Off |
| 34%   52C    P2             33W /  130W |   13101MiB /  20475MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA GeForce RTX 4080 ...    On  |   00000000:82:00.0 Off |                  N/A |
| 30%   34C    P8              4W /  320W |   12410MiB /  16376MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
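Note that GPUs 2 and 3 already report ~13 GiB in use while idle at 0% utilization. One way to check which processes hold that memory (under WSL2, nvidia-smi may not be able to resolve Windows-side PIDs):

# List compute processes and the GPU memory each one holds.
nvidia-smi --query-compute-apps=pid,process_name,used_gpu_memory --format=csv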

Error message:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m vllm.entrypoints.openai.api_server   --model Qwen/Qwen2-7B-Instruct   --host 0.0.0.0   --port 8000   --tensor-parallel-size 4
INFO 07-04 22:49:32 api_server.py:206] vLLM API server version 0.5.0.post1
INFO 07-04 22:49:32 api_server.py:207] args: Namespace(host='0.0.0.0', port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='Qwen/Qwen2-7B-Instruct', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=4, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, device='auto', scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, model_loader_extra_config=None, preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
INFO 07-04 22:49:32 config.py:698] Defaulting to use mp for distributed inference
INFO 07-04 22:49:32 llm_engine.py:169] Initializing an LLM engine (v0.5.0.post1) with config: model='Qwen/Qwen2-7B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=Qwen/Qwen2-7B-Instruct, use_v2_block_manager=False, enable_prefix_caching=False)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
(VllmWorkerProcess pid=1866) WARNING 07-04 22:49:33 utils.py:541] Using 'pin_memory=False' as WSL is detected. This may slow down the performance.
(VllmWorkerProcess pid=1867) WARNING 07-04 22:49:33 utils.py:541] Using 'pin_memory=False' as WSL is detected. This may slow down the performance.
(VllmWorkerProcess pid=1868) WARNING 07-04 22:49:33 utils.py:541] Using 'pin_memory=False' as WSL is detected. This may slow down the performance.
WARNING 07-04 22:49:33 utils.py:541] Using 'pin_memory=False' as WSL is detected. This may slow down the performance.
(VllmWorkerProcess pid=1866) Process VllmWorkerProcess:
(VllmWorkerProcess pid=1866) Traceback (most recent call last):
(VllmWorkerProcess pid=1866)   File "/home/nd600/miniconda3/envs/vllm/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
(VllmWorkerProcess pid=1866)     self.run()
(VllmWorkerProcess pid=1866)   File "/home/nd600/miniconda3/envs/vllm/lib/python3.10/multiprocessing/process.py", line 108, in run
(VllmWorkerProcess pid=1866)     self._target(*self._args, **self._kwargs)
(VllmWorkerProcess pid=1866)   File "/home/nd600/AIProjects/vllm/vllm/executor/multiproc_worker_utils.py", line 210, in _run_worker_process
(VllmWorkerProcess pid=1866)     worker = worker_factory()
(VllmWorkerProcess pid=1866)   File "/home/nd600/AIProjects/vllm/vllm/executor/gpu_executor.py", line 68, in _create_worker
(VllmWorkerProcess pid=1866)     wrapper.init_worker(**self._get_worker_kwargs(local_rank, rank,
(VllmWorkerProcess pid=1866)   File "/home/nd600/AIProjects/vllm/vllm/worker/worker_base.py", line 334, in init_worker
(VllmWorkerProcess pid=1866)     self.worker = worker_class(*args, **kwargs)
(VllmWorkerProcess pid=1866)   File "/home/nd600/AIProjects/vllm/vllm/worker/worker.py", line 85, in __init__
(VllmWorkerProcess pid=1866)     self.model_runner: GPUModelRunnerBase = ModelRunnerClass(
(VllmWorkerProcess pid=1866)   File "/home/nd600/AIProjects/vllm/vllm/worker/model_runner.py", line 217, in __init__
(VllmWorkerProcess pid=1866)     self.attn_backend = get_attn_backend(
(VllmWorkerProcess pid=1866)   File "/home/nd600/AIProjects/vllm/vllm/attention/selector.py", line 45, in get_attn_backend
(VllmWorkerProcess pid=1866)     backend = which_attn_to_use(num_heads, head_size, num_kv_heads,
(VllmWorkerProcess pid=1866)   File "/home/nd600/AIProjects/vllm/vllm/attention/selector.py", line 151, in which_attn_to_use
(VllmWorkerProcess pid=1866)     if torch.cuda.get_device_capability()[0] < 8:
(VllmWorkerProcess pid=1866)   File "/home/nd600/miniconda3/envs/vllm/lib/python3.10/site-packages/torch/cuda/__init__.py", line 430, in get_device_capability
(VllmWorkerProcess pid=1866)     prop = get_device_properties(device)
(VllmWorkerProcess pid=1866)   File "/home/nd600/miniconda3/envs/vllm/lib/python3.10/site-packages/torch/cuda/__init__.py", line 444, in get_device_properties
(VllmWorkerProcess pid=1866)     _lazy_init()  # will define _get_device_properties
(VllmWorkerProcess pid=1866)   File "/home/nd600/miniconda3/envs/vllm/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
(VllmWorkerProcess pid=1866)     torch._C._cuda_init()
(VllmWorkerProcess pid=1866) RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 2: out of memory
ERROR 07-04 22:49:34 multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 1866 died, exit code: 1
INFO 07-04 22:49:34 multiproc_worker_utils.py:123] Killing local vLLM worker processes
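For anyone hitting this: the failure mode (cudaGetDeviceCount() returning "out of memory" inside a forked VllmWorkerProcess) matches the known pattern of forking workers after the parent process has already touched CUDA. Two untested workaround sketches, assuming this vLLM version supports the spawn worker method and the Ray backend:

# Untested: spawn worker processes instead of forking them.
VLLM_WORKER_MULTIPROC_METHOD=spawn CUDA_VISIBLE_DEVICES=0,1,2,3 \
  python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2-7B-Instruct \
  --host 0.0.0.0 --port 8000 --tensor-parallel-size 4

# Untested: use the Ray executor instead of the default mp backend.
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m vllm.entrypoints.openai.api_server \
  --model Qwen/Qwen2-7B-Instruct --host 0.0.0.0 --port 8000 \
  --tensor-parallel-size 4 --distributed-executor-backend ray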
double-vin commented 3 weeks ago

I have also encountered the same problem and hope it can be resolved soon.