vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: "Prompt logprob is not supported by multi step workers" for ngram speculative decoding #6306

Closed (ccdv-ai closed this issue 2 months ago)

ccdv-ai commented 4 months ago

Your current environment

Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.6
Libc version: glibc-2.35

Python version: 3.9.19 (main, May  6 2024, 19:43:03)  [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA L40
GPU 1: NVIDIA L40
GPU 2: NVIDIA L40
GPU 3: NVIDIA L40

Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      52 bits physical, 57 bits virtual
Byte Order:                         Little Endian
CPU(s):                             64
On-line CPU(s) list:                0-63
Vendor ID:                          AuthenticAMD
Model name:                         AMD EPYC 9124 16-Core Processor
CPU family:                         25
Model:                              17
Thread(s) per core:                 2
Core(s) per socket:                 16
Socket(s):                          2
Stepping:                           1
BogoMIPS:                           5990.87
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization:                     AMD-V
L1d cache:                          1 MiB (32 instances)
L1i cache:                          1 MiB (32 instances)
L2 cache:                           32 MiB (32 instances)
L3 cache:                           128 MiB (8 instances)
NUMA node(s):                       2
NUMA node0 CPU(s):                  0-15,32-47
NUMA node1 CPU(s):                  16-31,48-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] torchvision==0.18.0
[pip3] transformers==4.42.3
[pip3] triton==2.3.0
[conda] blas                      1.0                         mkl  
[conda] mkl                       2023.1.0         h213fc3f_46344  
[conda] mkl-service               2.4.0            py39h5eee18b_1  
[conda] mkl_fft                   1.3.8            py39h5eee18b_0  
[conda] mkl_random                1.2.4            py39hdb19cb5_0  
[conda] numpy                     1.26.4           py39h5f9d8c6_0  
[conda] numpy-base                1.26.4           py39hb5e798b_0  
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] torch                     2.3.0                    pypi_0    pypi
[conda] torchvision               0.18.0                   pypi_0    pypi
[conda] transformers              4.42.3                   pypi_0    pypi
[conda] triton                    2.3.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NODE    SYS     SYS     0-15,32-47      0               N/A
GPU1    NODE     X      SYS     SYS     0-15,32-47      0               N/A
GPU2    SYS     SYS      X      NODE    16-31,48-63     1               N/A
GPU3    SYS     SYS     NODE     X      16-31,48-63     1               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

Trying to use ngram speculative decoding (v0.5.1), but it fails with: WARNING 07-10 13:24:34 multi_step.py:57] Prompt logprob is not supported by multi step workers. (e.g., speculative decode uses multi step workers)
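
For context, [ngram] speculative decoding (prompt lookup) uses no draft model: it proposes draft tokens by matching the last few generated tokens against an earlier occurrence in the context and replaying what followed there. A minimal sketch of the idea (simplified, with illustrative names; this is not vLLM's implementation):

def ngram_propose(token_ids, max_ngram=6, min_ngram=1, k=5):
    """Propose up to k draft tokens by prompt lookup.

    Try the longest suffix n-gram first; if it occurred earlier in the
    context, speculate that the same continuation follows again.
    """
    for n in range(max_ngram, min_ngram - 1, -1):
        if n > len(token_ids):
            continue
        suffix = token_ids[-n:]
        # Search earlier occurrences of the suffix, most recent first.
        for start in range(len(token_ids) - n - 1, -1, -1):
            if token_ids[start:start + n] == suffix:
                continuation = token_ids[start + n:start + n + k]
                if continuation:
                    return continuation  # drafts to verify with the target model
    return []  # no match: fall back to normal decoding

# Suffix [1, 2] already appeared at the start, so propose what followed it.
print(ngram_propose([1, 2, 3, 4, 1, 2], max_ngram=2, k=3))  # [3, 4, 1]

The server is launched with: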

python -u -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --model models/mixtral-instruct-awq \
    --dtype "auto" \
    --port 8002 \
    --seed 123 \
    --quantization awq \
    --max-model-len 32768 \
    --tensor-parallel-size 1 \
    --gpu-memory-utilization 0.95 \
    --speculative_model "[ngram]" \
    --num_speculative_tokens 5 \
    --ngram_prompt_lookup_max 6 \
    --use-v2-block-manager \
    --max-num-seqs 16 \
    --served-model-name mixtral
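
For reproduction context: prompt logprobs are typically requested through the completions endpoint by combining echo=True with logprobs, so a client call along these lines should trigger the warning. The exact client request is not shown in the report, so this one is assumed:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8002/v1", api_key="EMPTY")

# echo=True together with logprobs asks the server to score the prompt
# tokens as well (prompt logprobs), which the spec-decode multi-step
# worker warns about.
out = client.completions.create(
    model="mixtral",
    prompt="Hello, world",
    max_tokens=32,
    logprobs=1,
    echo=True,
)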

Full error:

INFO 07-10 13:24:13 metrics.py:295] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 7 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 21.7%, CPU KV cache usage: 0.0%.
INFO 07-10 13:24:23 metrics.py:295] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 7 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 21.7%, CPU KV cache usage: 0.0%.
INFO 07-10 13:24:33 metrics.py:295] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 7 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 21.7%, CPU KV cache usage: 0.0%.
WARNING 07-10 13:24:34 multi_step.py:57] Prompt logprob is not supported by multi step workers. (e.g., speculative decode uses multi step workers).
ERROR 07-10 13:24:37 async_llm_engine.py:53] Engine background task failed
ERROR 07-10 13:24:37 async_llm_engine.py:53] Traceback (most recent call last):
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 43, in _log_task_completion
ERROR 07-10 13:24:37 async_llm_engine.py:53]     return_value = task.result()
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 595, in run_engine_loop
ERROR 07-10 13:24:37 async_llm_engine.py:53]     result = task.result()
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 540, in engine_step
ERROR 07-10 13:24:37 async_llm_engine.py:53]     request_outputs = await self.engine.step_async(virtual_engine)
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 241, in step_async
ERROR 07-10 13:24:37 async_llm_engine.py:53]     output = await self.model_executor.execute_model_async(
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/executor/gpu_executor.py", line 122, in execute_model_async
ERROR 07-10 13:24:37 async_llm_engine.py:53]     output = await make_async(self.driver_worker.execute_model
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/concurrent/futures/thread.py", line 58, in run
ERROR 07-10 13:24:37 async_llm_engine.py:53]     result = self.fn(*self.args, **self.kwargs)
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 07-10 13:24:37 async_llm_engine.py:53]     return func(*args, **kwargs)
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/spec_decode/spec_decode_worker.py", line 341, in execute_model
ERROR 07-10 13:24:37 async_llm_engine.py:53]     return self._run_speculative_decoding_step(execute_model_req,
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/contextlib.py", line 79, in inner
ERROR 07-10 13:24:37 async_llm_engine.py:53]     return func(*args, **kwds)
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/spec_decode/spec_decode_worker.py", line 453, in _run_speculative_decoding_step
ERROR 07-10 13:24:37 async_llm_engine.py:53]     proposal_scores = self.scorer.score_proposals(
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/contextlib.py", line 79, in inner
ERROR 07-10 13:24:37 async_llm_engine.py:53]     return func(*args, **kwds)
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/spec_decode/batch_expansion.py", line 80, in score_proposals
ERROR 07-10 13:24:37 async_llm_engine.py:53]     target_sampler_output = self._scorer_worker.execute_model(
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/worker/worker_base.py", line 271, in execute_model
ERROR 07-10 13:24:37 async_llm_engine.py:53]     output = self.model_runner.execute_model(
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
ERROR 07-10 13:24:37 async_llm_engine.py:53]     return func(*args, **kwargs)
ERROR 07-10 13:24:37 async_llm_engine.py:53]   File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/worker/model_runner.py", line 1233, in execute_model
ERROR 07-10 13:24:37 async_llm_engine.py:53]     model_executable = self.graph_runners[virtual_engine][
ERROR 07-10 13:24:37 async_llm_engine.py:53] KeyError: 48
Exception in callback functools.partial(<function _log_task_completion at 0x7faa8e191040>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7faa71ac1190>>)
handle: <Handle functools.partial(<function _log_task_completion at 0x7faa8e191040>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7faa71ac1190>>)>
Traceback (most recent call last):
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 43, in _log_task_completion
    return_value = task.result()
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 595, in run_engine_loop
    result = task.result()
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 540, in engine_step
    request_outputs = await self.engine.step_async(virtual_engine)
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 241, in step_async
    output = await self.model_executor.execute_model_async(
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/executor/gpu_executor.py", line 122, in execute_model_async
    output = await make_async(self.driver_worker.execute_model
  File "/opt/anaconda/envs/transformers/lib/python3.9/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/spec_decode/spec_decode_worker.py", line 341, in execute_model
    return self._run_speculative_decoding_step(execute_model_req,
  File "/opt/anaconda/envs/transformers/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/spec_decode/spec_decode_worker.py", line 453, in _run_speculative_decoding_step
    proposal_scores = self.scorer.score_proposals(
  File "/opt/anaconda/envs/transformers/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/spec_decode/batch_expansion.py", line 80, in score_proposals
    target_sampler_output = self._scorer_worker.execute_model(
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/worker/worker_base.py", line 271, in execute_model
    output = self.model_runner.execute_model(
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/worker/model_runner.py", line 1233, in execute_model
    model_executable = self.graph_runners[virtual_engine][
KeyError: 48
tempcollab commented 2 months ago

Facing the same issue.

tjohnson31415 commented 2 months ago

The message about prompt logprobs not being supported is just a warning. The actual cause of the crash is this KeyError:

  File "/opt/anaconda/envs/transformers/lib/python3.9/site-packages/vllm/worker/model_runner.py", line 1233, in execute_model
    model_executable = self.graph_runners[virtual_engine][
KeyError: 48

So I believe this issue is the same as https://github.com/vllm-project/vllm/issues/7907
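
For anyone hitting this: graph_runners is, roughly, a per-virtual-engine mapping from captured CUDA-graph batch sizes to graph runners, and the lookup is by exact batch size, so a spec-decode expanded batch that was never captured raises KeyError. A toy illustration of the failure mode (names and sizes are illustrative, not vLLM's actual code):

# Batch sizes captured as CUDA graphs at startup (illustrative values).
captured_sizes = [1, 2, 4, 8, 16, 32]
graph_runners = {0: {size: f"graph@{size}" for size in captured_sizes}}

virtual_engine = 0
batch_size = 48  # e.g. a spec-decode expanded batch that was never captured

try:
    runner = graph_runners[virtual_engine][batch_size]  # exact-size lookup
except KeyError as err:
    print("KeyError:", err)  # KeyError: 48, matching the traceback above

# A typical guard: pad the batch up to the next captured size (or fall back
# to eager execution) before the lookup.
padded = next((s for s in captured_sizes if s >= batch_size), None)
print("pad to:", padded)  # None here, so an eager fallback would be needed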

youkaichao commented 2 months ago

This should be solved by https://github.com/vllm-project/vllm/pull/7894
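
Until that fix lands in a release, disabling CUDA graph capture should sidestep the graph_runners lookup entirely, at some decode-throughput cost: pass --enforce-eager to the server, or use enforce_eager=True with the offline API. An untested sketch, with engine arguments assumed from the server command above:

from vllm import LLM

# Workaround sketch: enforce_eager=True disables CUDA graph capture, so the
# graph_runners lookup that raised KeyError is never reached.
llm = LLM(
    model="models/mixtral-instruct-awq",
    quantization="awq",
    max_model_len=32768,
    speculative_model="[ngram]",
    num_speculative_tokens=5,
    ngram_prompt_lookup_max=6,
    use_v2_block_manager=True,
    enforce_eager=True,
)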