vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: v0.4.3 AsyncEngineDeadError #5443

Open changshivek opened 2 months ago

changshivek commented 2 months ago

Your current environment

Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.45.1.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB

Nvidia driver version: 535.104.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Address sizes:       46 bits physical, 57 bits virtual
Byte Order:          Little Endian
CPU(s):              64
On-line CPU(s) list: 0-63
Vendor ID:           GenuineIntel
Model name:          Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
CPU family:          6
Model:               106
Thread(s) per core:  1
Core(s) per socket:  32
Socket(s):           2
Stepping:            6
Frequency boost:     enabled
CPU max MHz:         3400.0000
CPU min MHz:         800.0000
BogoMIPS:            5200.00
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Virtualization:      VT-x
L1d cache:           3 MiB (64 instances)
L1i cache:           2 MiB (64 instances)
L2 cache:            80 MiB (64 instances)
L3 cache:            96 MiB (2 instances)
NUMA node(s):        2
NUMA node0 CPU(s):   0-31
NUMA node1 CPU(s):   32-63

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] transformers==4.41.2
[pip3] triton==2.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV8     SYS     SYS     SYS     PXB     NODE    SYS     32-63   1               N/A
GPU1    NV8      X      SYS     SYS     SYS     PXB     NODE    SYS     32-63   1               N/A
NIC0    SYS     SYS      X      NODE    NODE    SYS     SYS     NODE
NIC1    SYS     SYS     NODE     X      NODE    SYS     SYS     PIX
NIC2    SYS     SYS     NODE    NODE     X      SYS     SYS     NODE
NIC3    PXB     PXB     SYS     SYS     SYS      X      NODE    SYS
NIC4    NODE    NODE    SYS     SYS     SYS     NODE     X      SYS
NIC5    SYS     SYS     NODE    PIX     NODE    SYS     SYS      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_4
  NIC3: mlx5_5
  NIC4: mlx5_6
  NIC5: mlx5_bond_0

🐛 Describe the bug

I use the vllm/vllm-openai:v0.4.3 image on Kubernetes to deploy a 14B model with the following config:

        ...
        command: ["/bin/bash", "-c"]
        args: [
        "python3 -m vllm.entrypoints.openai.api_server \
        --host 0.0.0.0 \
        --model <local-model-path> \
        --trust-remote-code \
        --enforce-eager \
        --max-model-len 16384 \
        --gpu-memory-utilization 0.98 \
        --served-model-name ug2rct \
        --tensor-parallel-size 2"
         ]
        ...
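
For anyone trying to reproduce: a rough, hypothetical stand-in for the concurrent load the server was handling (the metrics line below reports 30 running requests) is a simple threaded client like this sketch; the host address is an assumption, and the model name matches --served-model-name above:

# Hypothetical load generator (not from the original report): drives ~30
# concurrent completions against the OpenAI-compatible endpoint.
import concurrent.futures
import requests

URL = "http://localhost:8000/v1/completions"  # assumed service address

def one_request(i: int) -> int:
    resp = requests.post(URL, json={
        "model": "ug2rct",  # matches --served-model-name in the config above
        "prompt": f"Request {i}: write a short summary of paged attention.",
        "max_tokens": 256,
    }, timeout=300)
    return resp.status_code

with concurrent.futures.ThreadPoolExecutor(max_workers=30) as pool:
    for status in pool.map(one_request, range(120)):
        print(status)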

After a short period of normal operation, this happened:

INFO 06-12 07:32:47 metrics.py:341] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 30 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 15.0%, CPU KV cache usage: 0.0%.
ERROR 06-12 07:32:53 async_llm_engine.py:524] Engine iteration timed out. This should never happen!
ERROR 06-12 07:32:53 async_llm_engine.py:45] Engine background task failed
ERROR 06-12 07:32:53 async_llm_engine.py:45] Traceback (most recent call last):
ERROR 06-12 07:32:53 async_llm_engine.py:45]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 495, in engine_step
ERROR 06-12 07:32:53 async_llm_engine.py:45]     request_outputs = await self.engine.step_async()
ERROR 06-12 07:32:53 async_llm_engine.py:45]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 226, in step_async
ERROR 06-12 07:32:53 async_llm_engine.py:45]     output = await self.model_executor.execute_model_async(
ERROR 06-12 07:32:53 async_llm_engine.py:45]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/distributed_gpu_executor.py", line 166, in execute_model_async
ERROR 06-12 07:32:53 async_llm_engine.py:45]     return await self._driver_execute_model_async(execute_model_req)
ERROR 06-12 07:32:53 async_llm_engine.py:45]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/ray_gpu_executor.py", line 324, in _driver_execute_model_async
ERROR 06-12 07:32:53 async_llm_engine.py:45]     return await self.driver_exec_method("execute_model",
ERROR 06-12 07:32:53 async_llm_engine.py:45] asyncio.exceptions.CancelledError
ERROR 06-12 07:32:53 async_llm_engine.py:45]
ERROR 06-12 07:32:53 async_llm_engine.py:45] During handling of the above exception, another exception occurred:
ERROR 06-12 07:32:53 async_llm_engine.py:45]
ERROR 06-12 07:32:53 async_llm_engine.py:45] Traceback (most recent call last):
ERROR 06-12 07:32:53 async_llm_engine.py:45]   File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
ERROR 06-12 07:32:53 async_llm_engine.py:45]     return fut.result()
ERROR 06-12 07:32:53 async_llm_engine.py:45] asyncio.exceptions.CancelledError
ERROR 06-12 07:32:53 async_llm_engine.py:45]
ERROR 06-12 07:32:53 async_llm_engine.py:45] The above exception was the direct cause of the following exception:
ERROR 06-12 07:32:53 async_llm_engine.py:45]
ERROR 06-12 07:32:53 async_llm_engine.py:45] Traceback (most recent call last):
ERROR 06-12 07:32:53 async_llm_engine.py:45]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 40, in _raise_exception_on_finish
ERROR 06-12 07:32:53 async_llm_engine.py:45]     task.result()
ERROR 06-12 07:32:53 async_llm_engine.py:45]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 521, in run_engine_loop
ERROR 06-12 07:32:53 async_llm_engine.py:45]     has_requests_in_progress = await asyncio.wait_for(
ERROR 06-12 07:32:53 async_llm_engine.py:45]   File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
ERROR 06-12 07:32:53 async_llm_engine.py:45]     raise exceptions.TimeoutError() from exc
ERROR 06-12 07:32:53 async_llm_engine.py:45] asyncio.exceptions.TimeoutError
ERROR:asyncio:Exception in callback functools.partial(<function _raise_exception_on_finish at 0x7f4a6a428ca0>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f4af35782e0>>)
handle: <Handle functools.partial(<function _raise_exception_on_finish at 0x7f4a6a428ca0>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f4af35782e0>>)>
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 495, in engine_step
    request_outputs = await self.engine.step_async()
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 226, in step_async
    output = await self.model_executor.execute_model_async(
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/distributed_gpu_executor.py", line 166, in execute_model_async
    return await self._driver_execute_model_async(execute_model_req)
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/ray_gpu_executor.py", line 324, in _driver_execute_model_async
    return await self.driver_exec_method("execute_model",
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
    return fut.result()
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 40, in _raise_exception_on_finish
    task.result()
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 521, in run_engine_loop
    has_requests_in_progress = await asyncio.wait_for(
  File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
    raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 47, in _raise_exception_on_finish
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
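
To make the exception chain above concrete: run_engine_loop wraps each engine iteration in asyncio.wait_for with a fixed timeout (ENGINE_ITERATION_TIMEOUT_S in async_llm_engine.py). When a step hangs, for example on a stuck collective in the tensor-parallel workers, wait_for cancels it (the CancelledError), re-raises TimeoutError, the background task dies, and the task's done-callback converts that into AsyncEngineDeadError. A minimal self-contained sketch of the same pattern (illustrative only, not vLLM's actual code, with the timeout shortened so it fires quickly):

import asyncio

ENGINE_ITERATION_TIMEOUT_S = 2  # shortened for the demo

class AsyncEngineDeadError(RuntimeError):
    pass

async def engine_step():
    # stands in for step_async(); a hung collective in the workers
    # would block here indefinitely
    await asyncio.sleep(3600)

async def run_engine_loop():
    while True:
        # the watchdog seen in the traceback: wait_for cancels a stuck step
        # (CancelledError) and re-raises it as TimeoutError, killing this task
        await asyncio.wait_for(engine_step(), ENGINE_ITERATION_TIMEOUT_S)

def on_done(task: asyncio.Task) -> None:
    # done-callback: surfaces the dead loop as AsyncEngineDeadError, which
    # asyncio then reports as "Exception in callback ...", as in the logs
    try:
        task.result()
    except Exception as exc:
        raise AsyncEngineDeadError("Task finished unexpectedly.") from exc

async def main():
    task = asyncio.get_running_loop().create_task(run_engine_loop())
    task.add_done_callback(on_done)
    await asyncio.sleep(5)  # long enough for the watchdog to fire

asyncio.run(main())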

The exception message says "This should never happen!", but it did happen. Has anyone else run into this issue? Are there any solutions?

changshivek commented 2 months ago

Update: the same error also appears with vLLM v0.5.0:

ERROR 06-13 02:36:52 async_llm_engine.py:532] Engine iteration timed out. This should never happen!
ERROR 06-13 02:36:52 async_llm_engine.py:52] Engine background task failed
ERROR 06-13 02:36:52 async_llm_engine.py:52] Traceback (most recent call last):
ERROR 06-13 02:36:52 async_llm_engine.py:52]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 503, in engine_step
ERROR 06-13 02:36:52 async_llm_engine.py:52]     request_outputs = await self.engine.step_async()
ERROR 06-13 02:36:52 async_llm_engine.py:52]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 235, in step_async
ERROR 06-13 02:36:52 async_llm_engine.py:52]     output = await self.model_executor.execute_model_async(
ERROR 06-13 02:36:52 async_llm_engine.py:52]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/distributed_gpu_executor.py", line 166, in execute_model_async
ERROR 06-13 02:36:52 async_llm_engine.py:52]     return await self._driver_execute_model_async(execute_model_req)
ERROR 06-13 02:36:52 async_llm_engine.py:52]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 149, in _driver_execute_model_async
ERROR 06-13 02:36:52 async_llm_engine.py:52]     return await self.driver_exec_model(execute_model_req)
ERROR 06-13 02:36:52 async_llm_engine.py:52] asyncio.exceptions.CancelledError
ERROR 06-13 02:36:52 async_llm_engine.py:52]
ERROR 06-13 02:36:52 async_llm_engine.py:52] During handling of the above exception, another exception occurred:
ERROR 06-13 02:36:52 async_llm_engine.py:52]
ERROR 06-13 02:36:52 async_llm_engine.py:52] Traceback (most recent call last):
ERROR 06-13 02:36:52 async_llm_engine.py:52]   File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
ERROR 06-13 02:36:52 async_llm_engine.py:52]     return fut.result()
ERROR 06-13 02:36:52 async_llm_engine.py:52] asyncio.exceptions.CancelledError
ERROR 06-13 02:36:52 async_llm_engine.py:52]
ERROR 06-13 02:36:52 async_llm_engine.py:52] The above exception was the direct cause of the following exception:
ERROR 06-13 02:36:52 async_llm_engine.py:52]
ERROR 06-13 02:36:52 async_llm_engine.py:52] Traceback (most recent call last):
ERROR 06-13 02:36:52 async_llm_engine.py:52]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 42, in _log_task_completion
ERROR 06-13 02:36:52 async_llm_engine.py:52]     return_value = task.result()
ERROR 06-13 02:36:52 async_llm_engine.py:52]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 529, in run_engine_loop
ERROR 06-13 02:36:52 async_llm_engine.py:52]     has_requests_in_progress = await asyncio.wait_for(
ERROR 06-13 02:36:52 async_llm_engine.py:52]   File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
ERROR 06-13 02:36:52 async_llm_engine.py:52]     raise exceptions.TimeoutError() from exc
ERROR 06-13 02:36:52 async_llm_engine.py:52] asyncio.exceptions.TimeoutError
ERROR:asyncio:Exception in callback functools.partial(<function _log_task_completion at 0x7fcdf0cc3ac0>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7fcea3705930>>)
handle: <Handle functools.partial(<function _log_task_completion at 0x7fcdf0cc3ac0>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7fcea3705930>>)>
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 503, in engine_step
    request_outputs = await self.engine.step_async()
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 235, in step_async
    output = await self.model_executor.execute_model_async(
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/distributed_gpu_executor.py", line 166, in execute_model_async
    return await self._driver_execute_model_async(execute_model_req)
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 149, in _driver_execute_model_async
    return await self.driver_exec_model(execute_model_req)
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
    return fut.result()
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 42, in _log_task_completion
    return_value = task.result()
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 529, in run_engine_loop
    has_requests_in_progress = await asyncio.wait_for(
  File "/usr/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
    raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 54, in _log_task_completion
INFO 06-13 02:36:52 async_llm_engine.py:167] Aborted request cmpl-5b8a3cb3d04d41babc4903e537e39544.
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for theactual cause.

ZZhangxian commented 2 months ago

How did you solve this problem? I have the same issue as you.

DarkLight1337 commented 2 months ago

Ref. #5732

somelaoda commented 2 months ago

How did you solve this problem? I have the same issue as you.

robertgshaw2-neuralmagic commented 1 month ago

I'm looking into the AsyncEngineDeadError issues under #5901

Could you please share the following so that I can reproduce the issue:

  • The server launch command used
  • A sample request that causes the error

changshivek commented 1 month ago

I launched the vLLM inference server on Kubernetes; here is my launch configuration:

      containers:
      - name: qw2-vllm # custom name
        image: harbor.myharbor.cn/myrepo/vllm-openai:v0.5.0 # name of the inference image
        ports:
        - containerPort: 8000
        # securityContext:
        #   privileged: true
        # command: ["/bin/sh","-c", "sleep 1d"] # inference startup script
        command: ["/bin/bash", "-c"] # inference startup script
        args: [
        # "sudo sed -i '175,+2s/\"dns.google\"/\"8.8.8.8\"/g' /workspace/vllm/utils.py && \
        "nvidia-smi;python3 -m vllm.entrypoints.openai.api_server \
        --host 0.0.0.0 \
        --model /fl/nlp/common/plms/qwen2/Qwen2-72B-Instruct \
        --trust-remote-code \
        --enforce-eager \
        --max-model-len 32768 \
        --gpu-memory-utilization 0.9 \
        --served-model-name qwen2-72bc \
        --tensor-parallel-size 8"
         ]
        resources:
          limits:
            nvidia.com/gpu: 8 # required: number of GPUs used per node

Since this bug does not seem to be triggered by specific requests, I am unable to provide a sample request, sorry.
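
Until the root cause is found, one mitigation is to restart the server automatically when the engine dies. Below is a minimal sketch of an external watchdog polling the OpenAI server's /health endpoint (the URL is an assumption based on the config above; on Kubernetes the same HTTP check is usually expressed as a livenessProbe so the pod restarts itself):

# Sketch of an external watchdog: poll vLLM's /health endpoint and exit
# non-zero after repeated failures, so a supervisor (or a Kubernetes
# livenessProbe doing the same HTTP check) can restart the server.
import sys
import time
import requests

HEALTH_URL = "http://localhost:8000/health"  # assumed service address
FAILURES_BEFORE_EXIT = 3

failures = 0
while True:
    try:
        requests.get(HEALTH_URL, timeout=10).raise_for_status()
        failures = 0
    except requests.RequestException:
        failures += 1
        if failures >= FAILURES_BEFORE_EXIT:
            sys.exit(1)  # signal the supervisor to restart the pod
    time.sleep(15)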
