vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: `flash_attn_cuda.varlen_fwd` may output a bad result when enabling prefix caching #5678

Open syGOAT opened 5 months ago

syGOAT commented 5 months ago

Your current environment

PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.31

Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
GPU 2: NVIDIA L20
GPU 3: NVIDIA L20
GPU 4: NVIDIA L20
GPU 5: NVIDIA L20
GPU 6: NVIDIA L20
GPU 7: NVIDIA L20

Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Byte Order:                         Little Endian
Address sizes:                      52 bits physical, 57 bits virtual
CPU(s):                             180
On-line CPU(s) list:                0-179
Thread(s) per core:                 2
Core(s) per socket:                 45
Socket(s):                          2
NUMA node(s):                       2
Vendor ID:                          GenuineIntel
CPU family:                         6
Model:                              143
Model name:                         Intel(R) Xeon(R) Platinum 8457C
Stepping:                           8
CPU MHz:                            2600.000
BogoMIPS:                           5200.00
Hypervisor vendor:                  KVM
Virtualization type:                full
L1d cache:                          4.2 MiB
L1i cache:                          2.8 MiB
L2 cache:                           180 MiB
L3 cache:                           195 MiB
NUMA node0 CPU(s):                  0-89
NUMA node1 CPU(s):                  90-179
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Unknown: No mitigations
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Mitigation; TSX disabled
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] triton==2.3.0
[pip3] vllm_nccl_cu12==2.18.1.0.4.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] torch                     2.3.0                    pypi_0    pypi
[conda] triton                    2.3.0                    pypi_0    pypi
[conda] vllm-nccl-cu12            2.18.1.0.4.0             pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     0-89    0               N/A
GPU1    SYS      X      SYS     SYS     SYS     SYS     SYS     SYS     SYS     0-89    0               N/A
GPU2    SYS     SYS      X      SYS     SYS     SYS     SYS     SYS     SYS     0-89    0               N/A
GPU3    SYS     SYS     SYS      X      SYS     SYS     SYS     SYS     SYS     0-89    0               N/A
GPU4    SYS     SYS     SYS     SYS      X      SYS     SYS     SYS     SYS     90-179  1               N/A
GPU5    SYS     SYS     SYS     SYS     SYS      X      SYS     SYS     SYS     90-179  1               N/A
GPU6    SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     SYS     90-179  1               N/A
GPU7    SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     90-179  1               N/A
NIC0    SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0

🐛 Describe the bug

My command:

python -m vllm.entrypoints.openai.api_server --model /root/autodl-tmp/model/Meta-Llama-3-70B-Instruct --tensor-parallel-size 8 --port 8000 --served-model-name gpt-4 --distributed-executor-backend mp --enable-prefix-caching

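For reference, a guided-JSON request of the kind that seems to trigger the failure looks roughly like the following sketch. The prompt, schema, and timeout are illustrative and not taken from the original report; guided_json is vLLM's guided-decoding extension to the chat-completions request body.

# Hypothetical guided-JSON request (illustrative prompt and schema).
import requests

schema = {
    "type": "object",
    "properties": {"answer": {"type": "string"}},
    "required": ["answer"],
}

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "gpt-4",  # matches --served-model-name above
        "messages": [{"role": "user", "content": "Answer in JSON."}],
        "stream": True,
        "guided_json": schema,  # vLLM extension for guided decoding
    },
    stream=True,
    timeout=60,
)
for line in resp.iter_lines():
    if line:
        print(line.decode())
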
The engine started. When some requests were posted (possibly guided JSON requests, like the sketch above), something went wrong:

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/responses.py", line 265, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/responses.py", line 261, in wrap
    await func()
  File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/responses.py", line 238, in listen_for_disconnect
    message = await receive()
              ^^^^^^^^^^^^^^^
  File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 568, in receive
    await self.message_event.wait()
  File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/asyncio/locks.py", line 213, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f46fc926c50

During handling of the above exception, another exception occurred:

  + Exception Group Traceback (most recent call last):
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
  |     result = await app(  # type: ignore[func-returns-value]
  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
  |     return await self.app(scope, receive, send)
  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
  |     await super().__call__(scope, receive, send)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
  |     raise exc
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
  |     await self.app(scope, receive, _send)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/middleware/cors.py", line 85, in __call__
  |     await self.app(scope, receive, send)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
  |     await route.handle(scope, receive, send)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
  |     await self.app(scope, receive, send)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/routing.py", line 75, in app
  |     await response(scope, receive, send)
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/responses.py", line 258, in __call__
  |     async with anyio.create_task_group() as task_group:
  |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 678, in __aexit__
  |     raise BaseExceptionGroup(
  | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/responses.py", line 261, in wrap
    |     await func()
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/responses.py", line 250, in stream_response
    |     async for chunk in self.body_iterator:
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/entrypoints/openai/serving_chat.py", line 311, in chat_completion_stream_generator
    |     async for res in result_generator:
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 670, in generate
    |     async for output in self._process_request(
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 777, in _process_request
    |     raise e
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 773, in _process_request
    |     async for request_output in stream:
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 89, in __anext__
    |     raise result
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 42, in _log_task_completion
    |     return_value = task.result()
    |                    ^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 529, in run_engine_loop
    |     has_requests_in_progress = await asyncio.wait_for(
    |                                ^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/asyncio/tasks.py", line 489, in wait_for
    |     return fut.result()
    |            ^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 503, in engine_step
    |     request_outputs = await self.engine.step_async()
    |                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 235, in step_async
    |     output = await self.model_executor.execute_model_async(
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/executor/distributed_gpu_executor.py", line 166, in execute_model_async
    |     return await self._driver_execute_model_async(execute_model_req)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/executor/multiproc_gpu_executor.py", line 149, in _driver_execute_model_async
    |     return await self.driver_exec_model(execute_model_req)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    |     result = self.fn(*self.args, **self.kwargs)
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    |     return func(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/worker/worker.py", line 272, in execute_model
    |     output = self.model_runner.execute_model(seq_group_metadata_list,
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    |     return func(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/worker/model_runner.py", line 738, in execute_model
    |     hidden_states = model_executable(
    |                     ^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    |     return self._call_impl(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    |     return forward_call(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 371, in forward
    |     hidden_states = self.model(input_ids, positions, kv_caches,
    |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    |     return self._call_impl(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    |     return forward_call(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 288, in forward
    |     hidden_states, residual = layer(
    |                               ^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    |     return self._call_impl(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    |     return forward_call(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 227, in forward
    |     hidden_states = self.self_attn(
    |                     ^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    |     return self._call_impl(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    |     return forward_call(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 161, in forward
    |     attn_output = self.attn(q, k, v, kv_cache, attn_metadata)
    |                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    |     return self._call_impl(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    |     return forward_call(*args, **kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/attention/layer.py", line 89, in forward
    |     return self.impl.forward(query, key, value, kv_cache, attn_metadata,
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/attention/backends/flash_attn.py", line 338, in forward
    |     flash_attn_varlen_func(
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm_flash_attn/flash_attn_interface.py", line 1099, in flash_attn_varlen_func
    |     return FlashAttnVarlenFunc.apply(
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/torch/autograd/function.py", line 598, in apply
    |     return super().apply(*args, **kwargs)  # type: ignore[misc]
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm_flash_attn/flash_attn_interface.py", line 596, in forward
    |     out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_varlen_forward(
    |                                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm_flash_attn/flash_attn_interface.py", line 88, in _flash_attn_varlen_forward
    |     out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.varlen_fwd(
    |                                                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    | RuntimeError: out must have shape (total_q, num_heads, head_size_og)
    | 
    | The above exception was the direct cause of the following exception:
    | 
    | Traceback (most recent call last):
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/responses.py", line 261, in wrap
    |     await func()
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/starlette/responses.py", line 250, in stream_response
    |     async for chunk in self.body_iterator:
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/entrypoints/openai/serving_chat.py", line 311, in chat_completion_stream_generator
    |     async for res in result_generator:
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 670, in generate
    |     async for output in self._process_request(
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 764, in _process_request
    |     stream = await self.add_request(
    |              ^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 569, in add_request
    |     self.start_background_loop()
    |   File "/root/autodl-tmp/miniconda3/envs/vllm-py311/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 440, in start_background_loop
    |     raise AsyncEngineDeadError(
    | vllm.engine.async_llm_engine.AsyncEngineDeadError: Background loop has errored already.
    +------------------------------------
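
For context on the RuntimeError above: the varlen kernel packs every sequence in the batch into a single token dimension, so q and the output buffer must both have shape (total_q, num_heads, head_size), where total_q is the sum of the per-sequence query lengths described by cu_seqlens_q. A minimal sketch of this shape contract, assuming the upstream flash-attn interface rather than the vllm_flash_attn fork (illustrative values; requires flash-attn and a CUDA device):

# Shape contract behind the error -- illustrative values, assumes the
# upstream flash-attn package and a CUDA device.
import torch
from flash_attn import flash_attn_varlen_func

num_heads, head_size = 8, 128
seqlens = [5, 3, 7]                  # three sequences packed together
total_q = sum(seqlens)               # 15 query tokens in total

# cu_seqlens_* is the int32 cumulative-length prefix: [0, 5, 8, 15]
cu_seqlens = torch.tensor([0, 5, 8, 15], dtype=torch.int32, device="cuda")

q = torch.randn(total_q, num_heads, head_size,
                dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

out = flash_attn_varlen_func(
    q, k, v,
    cu_seqlens_q=cu_seqlens, cu_seqlens_k=cu_seqlens,
    max_seqlen_q=max(seqlens), max_seqlen_k=max(seqlens),
    causal=True,
)
# The kernel requires out.shape == (total_q, num_heads, head_size).
assert out.shape == (total_q, num_heads, head_size)

The failing check therefore points at an inconsistency between the number of query tokens and the buffer shapes the kernel receives when prefix caching is enabled.
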
youkaichao commented 5 months ago

cc @Yard1

mpoemsl commented 5 months ago

I also reported a similar problem in #5537

codevoyager1984 commented 4 months ago

Same issue here. Any progress? 👀

chenchunhui97 commented 3 months ago

Same issue.

zjjznw123 commented 3 months ago

(Quoted the same environment and traceback as the original report above.)

How was this solved? Thank you.

github-actions[bot] commented 6 days ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!