vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: AsyncEngineDeadError: Task finished unexpectedly with qwen2 72b #6208

[Open] thomZ1 opened this issue 1 month ago

thomZ1 commented 1 month ago

Your current environment

PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.31

Python version: 3.10.14 (main, May  6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.119-19-0009.11-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB

Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   48 bits physical, 48 bits virtual
CPU(s):                          232
On-line CPU(s) list:             0-231
Thread(s) per core:              2
Core(s) per socket:              58
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       AuthenticAMD
CPU family:                      25
Model:                           1
Model name:                      AMD EPYC 7K83 64-Core Processor
Stepping:                        1
CPU MHz:                         2445.404
BogoMIPS:                        4890.80
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       3.6 MiB
L1i cache:                       3.6 MiB
L2 cache:                        58 MiB
L3 cache:                        448 MiB
NUMA node0 CPU(s):               0-115
NUMA node1 CPU(s):               116-231
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat umip pku ospke vaes vpclmulqdq rdpid fsrm

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] onnxruntime==1.18.0
[pip3] sentence-transformers==3.0.0
[pip3] torch==2.3.0
[pip3] transformers==4.41.2
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==2.3.0
[pip3] vllm_nccl_cu12==2.18.1.0.4.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] sentence-transformers     3.0.0                    pypi_0    pypi
[conda] torch                     2.3.0                    pypi_0    pypi
[conda] transformers              4.41.2                   pypi_0    pypi
[conda] transformers-stream-generator 0.0.5                    pypi_0    pypi
[conda] triton                    2.3.0                    pypi_0    pypi
[conda] vllm-nccl-cu12            2.18.1.0.4.0             pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    NIC6    NIC7    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV8     NV8     NV8     NV8     NV8     NV8     NV8     PIX     PIX     SYS     SYS     SYS     SYS     SYS     SYS     0-115   0               N/A
GPU1    NV8      X      NV8     NV8     NV8     NV8     NV8     NV8     PIX     PIX     SYS     SYS     SYS     SYS     SYS     SYS     0-115   0               N/A
GPU2    NV8     NV8      X      NV8     NV8     NV8     NV8     NV8     SYS     SYS     PIX     PIX     SYS     SYS     SYS     SYS     0-115   0               N/A
GPU3    NV8     NV8     NV8      X      NV8     NV8     NV8     NV8     SYS     SYS     PIX     PIX     SYS     SYS     SYS     SYS     0-115   0               N/A
GPU4    NV8     NV8     NV8     NV8      X      NV8     NV8     NV8     SYS     SYS     SYS     SYS     PIX     PIX     SYS     SYS     116-231 1               N/A
GPU5    NV8     NV8     NV8     NV8     NV8      X      NV8     NV8     SYS     SYS     SYS     SYS     PIX     PIX     SYS     SYS     116-231 1               N/A
GPU6    NV8     NV8     NV8     NV8     NV8     NV8      X      NV8     SYS     SYS     SYS     SYS     SYS     SYS     PIX     PIX     116-231 1               N/A
GPU7    NV8     NV8     NV8     NV8     NV8     NV8     NV8      X      SYS     SYS     SYS     SYS     SYS     SYS     PIX     PIX     116-231 1               N/A
NIC0    PIX     PIX     SYS     SYS     SYS     SYS     SYS     SYS      X      PIX     SYS     SYS     SYS     SYS     SYS     SYS
NIC1    PIX     PIX     SYS     SYS     SYS     SYS     SYS     SYS     PIX      X      SYS     SYS     SYS     SYS     SYS     SYS
NIC2    SYS     SYS     PIX     PIX     SYS     SYS     SYS     SYS     SYS     SYS      X      PIX     SYS     SYS     SYS     SYS
NIC3    SYS     SYS     PIX     PIX     SYS     SYS     SYS     SYS     SYS     SYS     PIX      X      SYS     SYS     SYS     SYS
NIC4    SYS     SYS     SYS     SYS     PIX     PIX     SYS     SYS     SYS     SYS     SYS     SYS      X      PIX     SYS     SYS
NIC5    SYS     SYS     SYS     SYS     PIX     PIX     SYS     SYS     SYS     SYS     SYS     SYS     PIX      X      SYS     SYS
NIC6    SYS     SYS     SYS     SYS     SYS     SYS     PIX     PIX     SYS     SYS     SYS     SYS     SYS     SYS      X      PIX
NIC7    SYS     SYS     SYS     SYS     SYS     SYS     PIX     PIX     SYS     SYS     SYS     SYS     SYS     SYS     PIX      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_bond_0
  NIC1: mlx5_bond_1
  NIC2: mlx5_bond_2
  NIC3: mlx5_bond_3
  NIC4: mlx5_bond_4
  NIC5: mlx5_bond_5
  NIC6: mlx5_bond_6
  NIC7: mlx5_bond_7

🐛 Describe the bug

I am using eval-scope to measure the concurrent throughput of a Qwen2-72B-Instruct model served with vLLM. After running for a while with 8 concurrent sessions, ~8k input tokens, and ~2k output tokens per request, the vLLM service becomes unresponsive.

The benchmark command (eval-scope, https://github.com/modelscope/eval-scope/tree/main):

llmuses perf --url 'http://localhost:8000/v1/chat/completions' --parallel 8 --model 'qwen' --log-every-n-query 10 --read-timeout=1000 --connect-timeout=1000 -n 500 --max-prompt-length 100000 \
                --tokenizer-path '/mnt/cfs_standard/Model/Qwen2/Qwen2-72B-Instruct/' --api openai --query-template '{"model": "%m", "messages": [{"role": "user","content": "%p"}], "stream": true,"skip_special_tokens": false,"max_tokens":1600,"stop": [], "early_stopping":false ,"ignore_eos":false,"repetition_penalty":0.25}' \
                --dataset openqa --dataset-path '/mnt/cfs_standard/User/zhangyi/Inference/lbl_m16000.jsonl' --name 'vllm-perf-qwen-72b-p8-16k' --wandb-api-key 'c6ff0fb6067a9a68e8be256b29c7f3b69b01b8ce'
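
In case it helps with reproduction: the load pattern boils down to holding N concurrent streaming chat-completion requests open against the server. A rough aiohttp sketch of the same pattern (the endpoint, model name, and prompt below are placeholders, not my exact benchmark inputs):

```python
import asyncio
import aiohttp

URL = "http://localhost:8000/v1/chat/completions"  # endpoint from the report
PARALLEL = 8       # matches --parallel 8
MAX_TOKENS = 1600  # matches the query template

async def one_request(session: aiohttp.ClientSession, prompt: str) -> None:
    payload = {
        "model": "qwen",
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
        "max_tokens": MAX_TOKENS,
    }
    async with session.post(URL, json=payload) as resp:
        # Drain the SSE stream; when the engine dies, this simply stops
        # yielding tokens and eventually hits the client timeout.
        async for _line in resp.content:
            pass

async def main() -> None:
    timeout = aiohttp.ClientTimeout(total=1000)  # mirrors --read-timeout
    async with aiohttp.ClientSession(timeout=timeout) as session:
        prompts = ["(roughly 8k tokens of text here)"] * PARALLEL  # placeholder
        await asyncio.gather(*(one_request(session, p) for p in prompts))

if __name__ == "__main__":
    asyncio.run(main())
```

The server log around the point of failure: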
INFO 07-07 15:20:11 metrics.py:341] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 8 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 3.5%, CPU KV cache usage: 0.0%.
ERROR 07-07 15:20:11 async_llm_engine.py:535] Engine iteration timed out. This should never happen!
ERROR 07-07 15:20:11 async_llm_engine.py:52] Engine background task failed
ERROR 07-07 15:20:11 async_llm_engine.py:52] Traceback (most recent call last):
ERROR 07-07 15:20:11 async_llm_engine.py:52]   File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 506, in engine_step
ERROR 07-07 15:20:11 async_llm_engine.py:52]     request_outputs = await self.engine.step_async()
ERROR 07-07 15:20:11 async_llm_engine.py:52]   File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 235, in step_async
ERROR 07-07 15:20:11 async_llm_engine.py:52]     output = await self.model_executor.execute_model_async(
ERROR 07-07 15:20:11 async_llm_engine.py:52]   File "/root/miniconda3/lib/python3.10/site-packages/vllm/executor/distributed_gpu_executor.py", line 166, in execute_model_async
ERROR 07-07 15:20:11 async_llm_engine.py:52]     return await self._driver_execute_model_async(execute_model_req)
ERROR 07-07 15:20:11 async_llm_engine.py:52]   File "/root/miniconda3/lib/python3.10/site-packages/vllm/executor/multiproc_gpu_executor.py", line 149, in _driver_execute_model_async
ERROR 07-07 15:20:11 async_llm_engine.py:52]     return await self.driver_exec_model(execute_model_req)
ERROR 07-07 15:20:11 async_llm_engine.py:52] asyncio.exceptions.CancelledError
ERROR 07-07 15:20:11 async_llm_engine.py:52] 
ERROR 07-07 15:20:11 async_llm_engine.py:52] During handling of the above exception, another exception occurred:
ERROR 07-07 15:20:11 async_llm_engine.py:52] 
ERROR 07-07 15:20:11 async_llm_engine.py:52] Traceback (most recent call last):
ERROR 07-07 15:20:11 async_llm_engine.py:52]   File "/root/miniconda3/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
ERROR 07-07 15:20:11 async_llm_engine.py:52]     return fut.result()
ERROR 07-07 15:20:11 async_llm_engine.py:52] asyncio.exceptions.CancelledError
ERROR 07-07 15:20:11 async_llm_engine.py:52] 
ERROR 07-07 15:20:11 async_llm_engine.py:52] The above exception was the direct cause of the following exception:
ERROR 07-07 15:20:11 async_llm_engine.py:52] 
ERROR 07-07 15:20:11 async_llm_engine.py:52] Traceback (most recent call last):
ERROR 07-07 15:20:11 async_llm_engine.py:52]   File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 42, in _log_task_completion
ERROR 07-07 15:20:11 async_llm_engine.py:52]     return_value = task.result()
ERROR 07-07 15:20:11 async_llm_engine.py:52]   File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 532, in run_engine_loop
ERROR 07-07 15:20:11 async_llm_engine.py:52]     has_requests_in_progress = await asyncio.wait_for(
ERROR 07-07 15:20:11 async_llm_engine.py:52]   File "/root/miniconda3/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
ERROR 07-07 15:20:11 async_llm_engine.py:52]     raise exceptions.TimeoutError() from exc
ERROR 07-07 15:20:11 async_llm_engine.py:52] asyncio.exceptions.TimeoutError
Exception in callback functools.partial(<function _log_task_completion at 0x7f725fb83880>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f73e6b29600>>)
handle: <Handle functools.partial(<function _log_task_completion at 0x7f725fb83880>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f73e6b29600>>)>
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 506, in engine_step
    request_outputs = await self.engine.step_async()
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 235, in step_async
    output = await self.model_executor.execute_model_async(
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/executor/distributed_gpu_executor.py", line 166, in execute_model_async
    return await self._driver_execute_model_async(execute_model_req)
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/executor/multiproc_gpu_executor.py", line 149, in _driver_execute_model_async
    return await self.driver_exec_model(execute_model_req)
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
    return fut.result()
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 42, in _log_task_completion
    return_value = task.result()
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 532, in run_engine_loop
    has_requests_in_progress = await asyncio.wait_for(
  File "/root/miniconda3/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
    raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/root/miniconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 54, in _log_task_completion
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
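
For what it's worth, the CancelledError -> TimeoutError chain above is just how asyncio.wait_for reports a timeout: it cancels the awaited task, then raises TimeoutError from the cancellation. In other words, the engine step was hung, not crashed. A toy illustration of the same mechanics (nothing vLLM-specific):

```python
import asyncio

async def stuck_engine_step() -> None:
    # Stands in for a model_executor call that never returns.
    await asyncio.sleep(3600)

async def main() -> None:
    try:
        # wait_for cancels stuck_engine_step() on timeout, producing the
        # CancelledError, then re-raises it as TimeoutError -- the exact
        # chain seen in the traceback above.
        await asyncio.wait_for(stuck_engine_step(), timeout=1.0)
    except asyncio.TimeoutError:
        print("Engine iteration timed out")  # what vLLM logs before dying

asyncio.run(main())
```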
xbinglzh commented 1 month ago

ERROR 07-10 11:08:14 async_llm_engine.py:483] Engine iteration timed out. This should never happen!
ERROR 07-10 11:08:14 async_llm_engine.py:43] Engine background task failed
ERROR 07-10 11:08:14 async_llm_engine.py:43] Traceback (most recent call last):
ERROR 07-10 11:08:14 async_llm_engine.py:43]   File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 454, in engine_step
ERROR 07-10 11:08:14 async_llm_engine.py:43]     request_outputs = await self.engine.step_async()
ERROR 07-10 11:08:14 async_llm_engine.py:43]   File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 213, in step_async
ERROR 07-10 11:08:14 async_llm_engine.py:43]     output = await self.model_executor.execute_model_async(
ERROR 07-10 11:08:14 async_llm_engine.py:43]   File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 418, in execute_model_async
ERROR 07-10 11:08:14 async_llm_engine.py:43]     all_outputs = await self._run_workers_async(
ERROR 07-10 11:08:14 async_llm_engine.py:43]   File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 408, in _run_workers_async
ERROR 07-10 11:08:14 async_llm_engine.py:43]     all_outputs = await asyncio.gather(*coros)
ERROR 07-10 11:08:14 async_llm_engine.py:43] asyncio.exceptions.CancelledError
ERROR 07-10 11:08:14 async_llm_engine.py:43]
ERROR 07-10 11:08:14 async_llm_engine.py:43] During handling of the above exception, another exception occurred:
ERROR 07-10 11:08:14 async_llm_engine.py:43]
ERROR 07-10 11:08:14 async_llm_engine.py:43] Traceback (most recent call last):
ERROR 07-10 11:08:14 async_llm_engine.py:43]   File "/home/ray/anaconda3/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
ERROR 07-10 11:08:14 async_llm_engine.py:43]     return fut.result()
ERROR 07-10 11:08:14 async_llm_engine.py:43] asyncio.exceptions.CancelledError
ERROR 07-10 11:08:14 async_llm_engine.py:43]
ERROR 07-10 11:08:14 async_llm_engine.py:43] The above exception was the direct cause of the following exception:
ERROR 07-10 11:08:14 async_llm_engine.py:43]
ERROR 07-10 11:08:14 async_llm_engine.py:43] Traceback (most recent call last):
ERROR 07-10 11:08:14 async_llm_engine.py:43]   File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
ERROR 07-10 11:08:14 async_llm_engine.py:43]     task.result()
ERROR 07-10 11:08:14 async_llm_engine.py:43]   File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 480, in run_engine_loop
ERROR 07-10 11:08:14 async_llm_engine.py:43]     has_requests_in_progress = await asyncio.wait_for(
ERROR 07-10 11:08:14 async_llm_engine.py:43]   File "/home/ray/anaconda3/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
ERROR 07-10 11:08:14 async_llm_engine.py:43]     raise exceptions.TimeoutError() from exc
ERROR 07-10 11:08:14 async_llm_engine.py:43] asyncio.exceptions.TimeoutError
2024-07-10 11:08:14,573 - asyncio - ERROR - Exception in callback functools.partial(<function _raise_exception_on_finish at 0x7fcf686829e0>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7fd15e5da680>>)
handle: <Handle functools.partial(<function _raise_exception_on_finish at 0x7fcf686829e0>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7fd15e5da680>>)>
Traceback (most recent call last):
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 454, in engine_step
    request_outputs = await self.engine.step_async()
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 213, in step_async
    output = await self.model_executor.execute_model_async(
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 418, in execute_model_async
    all_outputs = await self._run_workers_async(
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/executor/ray_gpu_executor.py", line 408, in _run_workers_async
    all_outputs = await asyncio.gather(*coros)
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ray/anaconda3/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
    return fut.result()
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
    task.result()
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 480, in run_engine_loop
    has_requests_in_progress = await asyncio.wait_for(
  File "/home/ray/anaconda3/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
    raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/home/ray/anaconda3/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 45, in _raise_exception_on_finish
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.

Lzhang-hub commented 1 month ago

+1

jessiewiswjc commented 1 month ago

+1

wangye360 commented 1 month ago

+1

DarkLight1337 commented 1 month ago

We have a tracking issue (#5901) for this. Please provide more details there so we can better troubleshoot the underlying cause.

HaIouya commented 1 month ago

I hit this problem too when using GLM4-9B (int8) and Qwen2-72B (int4).

bltcn commented 1 month ago

Me too. Once "Engine iteration timed out. This should never happen!" occurs, the whole server stops responding.
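
Until the root cause is found, an external watchdog that restarts the server when it stops answering at least restores availability. A minimal sketch is below; the OpenAI-compatible server exposes a /health route, though whether it reflects a dead engine varies by version, so you may want to probe the completions endpoint instead. The restart command and unit name are placeholders for your own launcher. (If the engine is merely slow rather than truly hung, raising the engine iteration timeout, the VLLM_ENGINE_ITERATION_TIMEOUT_S environment variable in recent vLLM versions, may also help.)

```python
import subprocess
import time

import requests

HEALTH_URL = "http://localhost:8000/health"     # vLLM OpenAI server health route
RESTART_CMD = ["systemctl", "restart", "vllm"]  # hypothetical unit name

def healthy(timeout_s: float = 10.0) -> bool:
    try:
        return requests.get(HEALTH_URL, timeout=timeout_s).status_code == 200
    except requests.RequestException:
        return False

while True:
    if not healthy():
        # Restart the serving process and give it time to reload 72B weights.
        subprocess.run(RESTART_CMD, check=False)
        time.sleep(120)
    else:
        time.sleep(15)
```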