vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Phi-3-small-128k-instruct on 1 A100 GPU - Assertion error: Does not support prefix-enabled attention. #7787

Open congcongchen123 opened 3 months ago

congcongchen123 commented 3 months ago

Your current environment

The output of `python collect_env.py`:

```text
(myenv) aiscuser@node-0:~/vllm$ python collect_env.py
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.0
Libc version: glibc-2.31

Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-1045-azure-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB

Nvidia driver version: 535.86.10
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7V12 64-Core Processor
Stepping: 0
CPU MHz: 2445.440
BogoMIPS: 4890.88
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 3 MiB
L1i cache: 3 MiB
L2 cache: 48 MiB
L3 cache: 384 MiB
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
NUMA node2 CPU(s): 48-71
NUMA node3 CPU(s): 72-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru arat umip rdpid

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.20
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.44.1
[pip3] triton==3.0.0
[conda] No relevant packages
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.4@cc0eaf12b1a94bc2fd8d497f6615202699fcf7da
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
      GPU0  GPU1  NIC0  NIC1  NIC2  NIC3  NIC4  NIC5  NIC6  NIC7  NIC8  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0   X    NV12  SYS   SYS   SYS   SYS   SYS   NODE  NODE  SYS   SYS   24-47         1              N/A
GPU1  NV12   X    SYS   SYS   SYS   SYS   SYS   NODE  NODE  SYS   SYS   24-47         1              N/A
NIC0  SYS   SYS    X    NODE  SYS   SYS   SYS   SYS   SYS   SYS   SYS
NIC1  SYS   SYS   NODE   X    SYS   SYS   SYS   SYS   SYS   SYS   SYS
NIC2  SYS   SYS   SYS   SYS    X    SYS   SYS   SYS   SYS   NODE  NODE
NIC3  SYS   SYS   SYS   SYS   SYS    X    NODE  SYS   SYS   SYS   SYS
NIC4  SYS   SYS   SYS   SYS   SYS   NODE   X    SYS   SYS   SYS   SYS
NIC5  NODE  NODE  SYS   SYS   SYS   SYS   SYS    X    NODE  SYS   SYS
NIC6  NODE  NODE  SYS   SYS   SYS   SYS   SYS   NODE   X    SYS   SYS
NIC7  SYS   SYS   SYS   SYS   NODE  SYS   SYS   SYS   SYS    X    NODE
NIC8  SYS   SYS   SYS   SYS   NODE  SYS   SYS   SYS   SYS   NODE   X

Legend:
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:
  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5
  NIC6: mlx5_6
  NIC7: mlx5_7
  NIC8: mlx5_8
```

🐛 Describe the bug

  1. start the vLLM server: python -m vllm.entrypoints.openai.api_server --model 'microsoft/Phi-3-small-128k-instruct' --dtype auto --trust-remote-code
  2. from another terminal, send a request to the server: curl http://localhost:8000/v1/completions -H "Content-Type: application/json" -d '{"model": "microsoft/Phi-3-small-128k-instruct","prompt": "Who is the president of the united states?", "max_tokens": 1000,"temperature": 0.2,"top_p": 0.95,"echo": true}'
  3. The server crashes with the assertion error below (a Python version of the request in step 2 is sketched after the traceback):
    
    INFO 08-22 09:36:23 logger.py:36] Received request cmpl-7836154054e34e87a357c3c0f93d50b1-0: prompt: 'Who is the president of the united states?', params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.2, top_p=0.95, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=[], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=1000, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: [15546, 374, 279, 4872, 315, 279, 29292, 5415, 30], lora_request: None, prompt_adapter_request: None.
    INFO 08-22 09:36:23 async_llm_engine.py:208] Added request cmpl-7836154054e34e87a357c3c0f93d50b1-0.
    DEBUG 08-22 09:36:23 async_llm_engine.py:899] Waiting for new requests...
    DEBUG 08-22 09:36:23 async_llm_engine.py:913] Got new requests!
    ERROR 08-22 09:36:23 async_llm_engine.py:65] Engine background task failed
    ERROR 08-22 09:36:23 async_llm_engine.py:65] Traceback (most recent call last):
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/engine/async_llm_engine.py", line 55, in _log_task_completion
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return_value = task.result()
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/engine/async_llm_engine.py", line 930, in run_engine_loop
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     result = task.result()
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/engine/async_llm_engine.py", line 873, in engine_step
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     request_outputs = await self.engine.step_async(virtual_engine)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/engine/async_llm_engine.py", line 337, in step_async
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     output = await self.model_executor.execute_model_async(
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/executor/gpu_executor.py", line 178, in execute_model_async
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     output = await make_async(self.driver_worker.execute_model
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.conda/envs/myenv/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     result = self.fn(*self.args, **self.kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/worker/worker_base.py", line 322, in execute_model
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     output = self.model_runner.execute_model(
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return func(*args, **kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/worker/model_runner.py", line 1415, in execute_model
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     hidden_or_intermediate_states = model_executable(
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return self._call_impl(*args, **kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return forward_call(*args, **kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/model_executor/models/phi3_small.py", line 423, in forward
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     output_hidden_states = self.model(
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return self._call_impl(*args, **kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return forward_call(*args, **kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/model_executor/models/phi3_small.py", line 338, in forward
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     hidden_states = layer(
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return self._call_impl(*args, **kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return forward_call(*args, **kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/model_executor/models/phi3_small.py", line 282, in forward
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     hidden_states = self.self_attn(
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return self._call_impl(*args, **kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return forward_call(*args, **kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/model_executor/models/phi3_small.py", line 244, in forward
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     attn_output = self.attn(q, k, v, kv_cache, attn_metadata=attn_metadata)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return self._call_impl(*args, **kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return forward_call(*args, **kwargs)
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/attention/layer.py", line 98, in forward
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     return self.impl.forward(query,
    ERROR 08-22 09:36:23 async_llm_engine.py:65]   File "/home/aiscuser/vllm/vllm/attention/backends/blocksparse_attn.py", line 404, in forward
    ERROR 08-22 09:36:23 async_llm_engine.py:65]     or prefill_meta.block_tables.numel() == 0, \
    ERROR 08-22 09:36:23 async_llm_engine.py:65] AssertionError: Does not support prefix-enabled attention.
    Exception in callback functools.partial(<function _log_task_completion at 0x7f8b05581f30>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f8ae5cad060>>)
    handle: <Handle functools.partial(<function _log_task_completion at 0x7f8b05581f30>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f8ae5cad060>>)>
    Traceback (most recent call last):
    File "/home/aiscuser/vllm/vllm/engine/async_llm_engine.py", line 55, in _log_task_completion
    return_value = task.result()
    File "/home/aiscuser/vllm/vllm/engine/async_llm_engine.py", line 930, in run_engine_loop
    result = task.result()
    File "/home/aiscuser/vllm/vllm/engine/async_llm_engine.py", line 873, in engine_step
    request_outputs = await self.engine.step_async(virtual_engine)
    File "/home/aiscuser/vllm/vllm/engine/async_llm_engine.py", line 337, in step_async
    output = await self.model_executor.execute_model_async(
    File "/home/aiscuser/vllm/vllm/executor/gpu_executor.py", line 178, in execute_model_async
    output = await make_async(self.driver_worker.execute_model
    File "/home/aiscuser/.conda/envs/myenv/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
    File "/home/aiscuser/vllm/vllm/worker/worker_base.py", line 322, in execute_model
    output = self.model_runner.execute_model(
    File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
    File "/home/aiscuser/vllm/vllm/worker/model_runner.py", line 1415, in execute_model
    hidden_or_intermediate_states = model_executable(
    File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
    File "/home/aiscuser/vllm/vllm/model_executor/models/phi3_small.py", line 423, in forward
    output_hidden_states = self.model(
    File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
    File "/home/aiscuser/vllm/vllm/model_executor/models/phi3_small.py", line 338, in forward
    hidden_states = layer(
    File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
    File "/home/aiscuser/vllm/vllm/model_executor/models/phi3_small.py", line 282, in forward
    hidden_states = self.self_attn(
    File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
    File "/home/aiscuser/vllm/vllm/model_executor/models/phi3_small.py", line 244, in forward
    attn_output = self.attn(q, k, v, kv_cache, attn_metadata=attn_metadata)
    File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    File "/home/aiscuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
    File "/home/aiscuser/vllm/vllm/attention/layer.py", line 98, in forward
    return self.impl.forward(query,
    File "/home/aiscuser/vllm/vllm/attention/backends/blocksparse_attn.py", line 404, in forward
    or prefill_meta.block_tables.numel() == 0, \
    AssertionError: Does not support prefix-enabled attention.
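
For completeness, the same request as in step 2 can also be sent from Python with the OpenAI client instead of curl. This is a minimal sketch, assuming the `openai>=1.0` package is installed and the server from step 1 is running on localhost:8000; the script itself is not part of vLLM:

```python
# Hypothetical client-side reproduction script: mirrors the curl request from
# step 2 against the OpenAI-compatible vLLM server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="microsoft/Phi-3-small-128k-instruct",
    prompt="Who is the president of the united states?",
    max_tokens=1000,
    temperature=0.2,
    top_p=0.95,
    echo=True,
)
print(completion.choices[0].text)
```

On vLLM v0.5.4 this request triggers the same assertion on the server side.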
congcongchen123 commented 3 months ago

The vLLM version that works: v0.5.2

congcongchen123 commented 2 months ago

Looking into this bug, I found that chunked prefill is not correctly supported by the block-sparse attention module used by the Phi-3-small-128k-instruct model, and chunked prefill is turned on by default for models that support a >32k context length due to this PR: [Misc] Enable chunked prefill by default for long context models (#6666) · microsoft/vllm@729171a (github.com)

A quick fix is to disable chunked prefill by setting --enable-chunked-prefill=False. I will work on a proper fix for chunked prefill.
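
For anyone using the offline API rather than the server, here is a minimal sketch of the same workaround, assuming vLLM v0.5.4, where the `enable_chunked_prefill` engine argument is forwarded through `LLM(...)`:

```python
# Sketch of the workaround for the offline API: explicitly disable chunked
# prefill so the block-sparse attention backend never receives a chunked/prefix
# prefill it cannot handle.
from vllm import LLM, SamplingParams

llm = LLM(
    model="microsoft/Phi-3-small-128k-instruct",
    trust_remote_code=True,
    enable_chunked_prefill=False,  # otherwise on by default for >32k-context models
)

params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=1000)
outputs = llm.generate(["Who is the president of the united states?"], params)
print(outputs[0].outputs[0].text)
```

For the server in step 1, the equivalent is adding --enable-chunked-prefill=False to the launch command.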

ZetangForward commented 2 weeks ago

> Looking into this bug, I found that chunked prefill is not correctly supported by the block-sparse attention module used by the Phi-3-small-128k-instruct model, and chunked prefill is turned on by default for models that support a >32k context length due to this PR: [Misc] Enable chunked prefill by default for long context models (#6666) · microsoft/vllm@729171a (github.com)
>
> A quick fix is to disable chunked prefill by setting --enable-chunked-prefill=False. I will work on a proper fix for chunked prefill.

But with chunked prefill disabled, the inference speed is extremely slow...