vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Error executing method start_worker_execution_loop. This might cause deadlock in distributed execution. #7046

Open · ktrapeznikov opened this issue 4 months ago

ktrapeznikov commented 4 months ago

### Your current environment

```
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.31

Python version: 3.10.14 (main, Apr  6 2024, 18:45:05) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-189-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000

Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Byte Order:                         Little Endian
Address sizes:                      43 bits physical, 48 bits virtual
CPU(s):                             256
On-line CPU(s) list:                0-254
Off-line CPU(s) list:               255
Thread(s) per core:                 1
Core(s) per socket:                 64
Socket(s):                          2
NUMA node(s):                       2
Vendor ID:                          AuthenticAMD
CPU family:                         23
Model:                              49
Model name:                         AMD EPYC 7H12 64-Core Processor
Stepping:                           0
Frequency boost:                    enabled
CPU MHz:                            1460.043
CPU max MHz:                        2600.0000
CPU min MHz:                        1500.0000
BogoMIPS:                           5200.28
Virtualization:                     AMD-V
L1d cache:                          2 MiB
L1i cache:                          2 MiB
L2 cache:                           32 MiB
L3 cache:                           256 MiB
NUMA node0 CPU(s):                  0-63,128-191
NUMA node1 CPU(s):                  64-127,192-254
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Vulnerable
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca sme sev sev_es

Versions of relevant libraries:
[pip3] flashinfer==0.0.9+cu121torch2.3
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] transformers==4.43.1
[pip3] triton==2.3.1
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    NIC0    NIC1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NODE    SYS     SYS     0-63,128-191    0               N/A
GPU1    NODE     X      SYS     SYS     0-63,128-191    0               N/A
NIC0    SYS     SYS      X      PIX
NIC1    SYS     SYS     PIX      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
```

### 🐛 Describe the bug

When I run several concurrent requests against the `v1/chat/completions` API endpoint, I get an error.

```python
import requests
import tqdm

def run_image(image_file, prompt=None, **kwargs):
    # NOTE: prompt is accepted but never used below; get_image_url is a
    # caller-side helper that turns a file into an image URL.
    image_url = get_image_url(image_file)
    gen_kwargs = dict(
        max_tokens=256,
        top_p=0.9,
        temperature=0.6,
        model="llava-hf/llava-v1.6-34b-hf",
    )

    gen_kwargs.update(kwargs)

    chat = dict(
        messages=[{
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": image_url,
                    },
                },
            ],
        }],
    )

    gen_kwargs.update(chat)

    # server_address is the base URL of the vLLM server
    out = requests.post(f"{server_address}/v1/chat/completions", json=gen_kwargs)

    return out.json()

from concurrent.futures import ThreadPoolExecutor, as_completed

output = [None] * len(probe_files)
fs = {}
with ThreadPoolExecutor(max_workers=16) as E:
    for i, f in enumerate(probe_files):
        fs[E.submit(run_image, f, prompt=prompt, guided_json=schema)] = i

    for f in tqdm.tqdm(as_completed(fs), total=len(fs)):
        try:
            output[fs[f]] = f.result()
        except Exception:
            print(probe_files[fs[f]])
```
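
Aside: `guided_json` is a vLLM-specific extension to the OpenAI chat API. For reference, a hedged equivalent of the request above using the `openai` Python client (>= 1.x), where `server_address`, `image_url`, and `schema` stand for the same placeholders as in the script:

```python
from openai import OpenAI

# Sketch only: the same request via the OpenAI client; vLLM-specific
# parameters such as guided_json are passed through extra_body.
client = OpenAI(base_url=f"{server_address}/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="llava-hf/llava-v1.6-34b-hf",
    messages=[{"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": image_url}},
    ]}],
    max_tokens=256,
    top_p=0.9,
    temperature=0.6,
    extra_body={"guided_json": schema},
)
```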

I get the following in the vLLM server logs:

```

vllm-1  | INFO 08-01 20:15:27 async_llm_engine.py:173] Added request chat-89f347687fde4ff9841e322673fc42e5.
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382] Error executing method start_worker_execution_loop. This might cause deadlock in distributed execution.
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382] Traceback (most recent call last):
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 374, in execute_method
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return executor(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return func(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 65, in start_worker_execution_loop
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     output = self.execute_model(execute_model_req=None)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker_base.py", line 272, in execute_model
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     output = self.model_runner.execute_model(
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return func(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1314, in execute_model
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     hidden_or_intermediate_states = model_executable(
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return self._call_impl(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return forward_call(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/llava_next.py", line 502, in forward
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     vision_embeddings = self._process_image_input(image_input)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/llava_next.py", line 429, in _process_image_input
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     patch_embeddings = self._process_image_pixels(image_input)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/llava_next.py", line 419, in _process_image_pixels
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     stacked_image_features = self._image_pixels_to_features(
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/llava_next.py", line 325, in _image_pixels_to_features
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     image_features = vision_tower(pixel_values)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return self._call_impl(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return forward_call(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/clip.py", line 289, in forward
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return self.vision_model(pixel_values=pixel_values)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return self._call_impl(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return forward_call(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/clip.py", line 265, in forward
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     hidden_states = self.embeddings(pixel_values)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return self._call_impl(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return forward_call(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/clip.py", line 140, in forward
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     patch_embeds = self.patch_embedding(pixel_values.to(
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return self._call_impl(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return forward_call(*args, **kwargs)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 460, in forward
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return self._conv_forward(input, self.weight, self.bias)
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382]     return F.conv2d(input, weight, bias, self.stride,
vllm-1  | (RayWorkerWrapper pid=14379) ERROR 08-01 20:15:29 worker_base.py:382] RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:658] Engine iteration timed out. This should never happen!
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56] Engine background task failed
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56] Traceback (most recent call last):
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 631, in run_engine_loop
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]     done, _ = await asyncio.wait(
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]   File "/usr/lib/python3.10/asyncio/tasks.py", line 384, in wait
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]     return await _wait(fs, timeout, return_when, loop)
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]   File "/usr/lib/python3.10/asyncio/tasks.py", line 491, in _wait
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]     await waiter
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56] asyncio.exceptions.CancelledError
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56] 
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56] During handling of the above exception, another exception occurred:
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56] 
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56] Traceback (most recent call last):
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 46, in _log_task_completion
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]     return_value = task.result()
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 630, in run_engine_loop
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]     async with asyncio_timeout(ENGINE_ITERATION_TIMEOUT_S):
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_timeout.py", line 95, in __aexit__
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]     self._do_exit(exc_type)
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_timeout.py", line 178, in _do_exit
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56]     raise asyncio.TimeoutError
vllm-1  | ERROR 08-01 20:16:29 async_llm_engine.py:56] asyncio.exceptions.TimeoutError
```
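
The root `RuntimeError` is PyTorch's generic cross-device error; per the traceback it is raised in `F.conv2d` inside the CLIP patch embedding, i.e. the vision-tower weights and the incoming `pixel_values` ended up on different GPUs. A minimal standalone sketch that reproduces the same error class (assumes any machine with two CUDA devices; unrelated to vLLM internals):

```python
import torch

# Weights on cuda:1, input on cuda:0 -> the same error as in the logs:
# "Expected all tensors to be on the same device, ... cuda:0 and cuda:1"
conv = torch.nn.Conv2d(3, 8, kernel_size=3).to("cuda:1")
x = torch.randn(1, 3, 32, 32, device="cuda:0")
y = conv(x)  # raises RuntimeError
```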
github-actions[bot] commented 4 weeks ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!