vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: ValueError: could not broadcast input array from shape (513,) into shape (512,) #8432

Open ndao600 opened 1 month ago

ndao600 commented 1 month ago

### Your current environment

Collecting environment information...
/home/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/cuda/__init__.py:128: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 2: out of memory (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
  return torch._C._cuda_getDeviceCount() > 0
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35

Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA RTX 4000 Ada Generation
GPU 1: NVIDIA RTX 4000 Ada Generation
GPU 2: NVIDIA RTX 4000 Ada Generation
GPU 3: NVIDIA RTX 4000 Ada Generation
GPU 4: NVIDIA RTX 4000 Ada Generation
GPU 5: NVIDIA RTX 4000 Ada Generation
GPU 6: NVIDIA RTX 4000 Ada Generation
GPU 7: NVIDIA RTX 4000 Ada Generation

Nvidia driver version: 555.99
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 7960X 24-Cores
CPU family: 25
Model: 24
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
BogoMIPS: 8387.54
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Virtualization: AMD-V
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 24 MiB (24 instances)
L3 cache: 32 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.68
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.44.2
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-ml-py 12.560.30 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.68 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pyzmq 26.2.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] transformers 4.44.2 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: N/A
vLLM Build Flags: CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
      GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0   X    SYS   SYS   SYS   SYS   SYS   SYS   SYS                                N/A
GPU1  SYS    X    SYS   SYS   SYS   SYS   SYS   SYS                                N/A
GPU2  SYS   SYS    X    SYS   SYS   SYS   SYS   SYS                                N/A
GPU3  SYS   SYS   SYS    X    SYS   SYS   SYS   SYS                                N/A
GPU4  SYS   SYS   SYS   SYS    X    SYS   SYS   SYS                                N/A
GPU5  SYS   SYS   SYS   SYS   SYS    X    SYS   SYS                                N/A
GPU6  SYS   SYS   SYS   SYS   SYS   SYS    X    SYS                                N/A
GPU7  SYS   SYS   SYS   SYS   SYS   SYS   SYS    X                                 N/A

Legend:

X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX  = Connection traversing at most a single PCIe bridge
NV#  = Connection traversing a bonded set of # NVLinks

### Model Input Dumps

INFO: 127.0.0.1:47618 - "POST /v1/completions HTTP/1.1" 200 OK
ERROR 09-12 19:15:10 async_llm_engine.py:63] Engine background task failed
ERROR 09-12 19:15:10 async_llm_engine.py:63] Traceback (most recent call last):
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 53, in _log_task_completion
ERROR 09-12 19:15:10 async_llm_engine.py:63]     return_value = task.result()
ERROR 09-12 19:15:10 async_llm_engine.py:63]                    ^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 939, in run_engine_loop
ERROR 09-12 19:15:10 async_llm_engine.py:63]     result = task.result()
ERROR 09-12 19:15:10 async_llm_engine.py:63]              ^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 868, in engine_step
ERROR 09-12 19:15:10 async_llm_engine.py:63]     request_outputs = await self.engine.step_async(virtual_engine)
ERROR 09-12 19:15:10 async_llm_engine.py:63]                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 345, in step_async
ERROR 09-12 19:15:10 async_llm_engine.py:63]     output = await self.model_executor.execute_model_async(
ERROR 09-12 19:15:10 async_llm_engine.py:63]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/executor/distributed_gpu_executor.py", line 177, in execute_model_async
ERROR 09-12 19:15:10 async_llm_engine.py:63]     return await self._driver_execute_model_async(execute_model_req)
ERROR 09-12 19:15:10 async_llm_engine.py:63]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/executor/multiproc_gpu_executor.py", line 231, in _driver_execute_model_async
ERROR 09-12 19:15:10 async_llm_engine.py:63]     return await self.driver_exec_model(execute_model_req)
ERROR 09-12 19:15:10 async_llm_engine.py:63]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/concurrent/futures/thread.py", line 58, in run
ERROR 09-12 19:15:10 async_llm_engine.py:63]     result = self.fn(*self.args, **self.kwargs)
ERROR 09-12 19:15:10 async_llm_engine.py:63]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/worker_base.py", line 303, in execute_model
ERROR 09-12 19:15:10 async_llm_engine.py:63]     inputs = self.prepare_input(execute_model_req)
ERROR 09-12 19:15:10 async_llm_engine.py:63]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/multi_step_worker.py", line 164, in prepare_input
ERROR 09-12 19:15:10 async_llm_engine.py:63]     kwargs) = self._get_driver_input_and_broadcast(execute_model_req)
ERROR 09-12 19:15:10 async_llm_engine.py:63]               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/multi_step_worker.py", line 62, in _get_driver_input_and_broadcast
ERROR 09-12 19:15:10 async_llm_engine.py:63]     self.model_runner.prepare_model_input(
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/multi_step_model_runner.py", line 254, in prepare_model_input
ERROR 09-12 19:15:10 async_llm_engine.py:63]     frozen_model_input = self._base_model_runner.prepare_model_input(
ERROR 09-12 19:15:10 async_llm_engine.py:63]                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1380, in prepare_model_input
ERROR 09-12 19:15:10 async_llm_engine.py:63]     model_input = self._prepare_model_input_tensors(
ERROR 09-12 19:15:10 async_llm_engine.py:63]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1042, in _prepare_model_input_tensors
ERROR 09-12 19:15:10 async_llm_engine.py:63]     return builder.build()  # type: ignore
ERROR 09-12 19:15:10 async_llm_engine.py:63]            ^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 741, in build
ERROR 09-12 19:15:10 async_llm_engine.py:63]     attn_metadata = self.attn_metadata_builder.build(
ERROR 09-12 19:15:10 async_llm_engine.py:63]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63]   File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/attention/backends/flash_attn.py", line 467, in build
ERROR 09-12 19:15:10 async_llm_engine.py:63]     input_block_tables[i, :len(block_table)] = block_table
ERROR 09-12 19:15:10 async_llm_engine.py:63]     ~~~~~~^^^^^^^^^^^^^^^^^^^^^^
ERROR 09-12 19:15:10 async_llm_engine.py:63] ValueError: could not broadcast input array from shape (513,) into shape (512,)
Exception in callback functools.partial(<function _log_task_completion at 0x7f81020a6ca0>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f80fe6315e0>>)
handle: <Handle functools.partial(<function _log_task_completion at 0x7f81020a6ca0>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f80fe6315e0>>)>
Traceback (most recent call last):
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 53, in _log_task_completion
    return_value = task.result()
                   ^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 939, in run_engine_loop
    result = task.result()
             ^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 868, in engine_step
    request_outputs = await self.engine.step_async(virtual_engine)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 345, in step_async
    output = await self.model_executor.execute_model_async(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/executor/distributed_gpu_executor.py", line 177, in execute_model_async
    return await self._driver_execute_model_async(execute_model_req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/executor/multiproc_gpu_executor.py", line 231, in _driver_execute_model_async
    return await self.driver_exec_model(execute_model_req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/worker_base.py", line 303, in execute_model
    inputs = self.prepare_input(execute_model_req)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/multi_step_worker.py", line 164, in prepare_input
    kwargs) = self._get_driver_input_and_broadcast(execute_model_req)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/multi_step_worker.py", line 62, in _get_driver_input_and_broadcast
    self.model_runner.prepare_model_input(
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/multi_step_model_runner.py", line 254, in prepare_model_input
    frozen_model_input = self._base_model_runner.prepare_model_input(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1380, in prepare_model_input
    model_input = self._prepare_model_input_tensors(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1042, in _prepare_model_input_tensors
    return builder.build()  # type: ignore
           ^^^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 741, in build
    attn_metadata = self.attn_metadata_builder.build(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/attention/backends/flash_attn.py", line 467, in build
    input_block_tables[i, :len(block_table)] = block_table


ValueError: could not broadcast input array from shape (513,) into shape (512,)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/home/0/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/async_llm_engine.py", line 65, in _log_task_completion
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
ERROR 09-12 19:15:10 client.py:266] Got Unhealthy response from RPC Server
ERROR 09-12 19:15:10 client.py:412] AsyncEngineDeadError('Background loop is stopped.')
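For context, the line that fails in flash_attn.py copies each request's block table into a row of a preallocated 2-D array, so the copy breaks as soon as one sequence needs more blocks than the row was sized for. A minimal NumPy sketch of the same failure (the 512/513 sizes mirror the log above; the batch size and dtype are illustrative, not vLLM's actual values):

```python
import numpy as np

max_blocks_per_seq = 512  # width the table was preallocated with
input_block_tables = np.zeros((8, max_blocks_per_seq), dtype=np.int32)

# One sequence has grown to need 513 blocks.
block_table = np.arange(513, dtype=np.int32)

# Slicing a 512-wide row with [:513] still yields only 512 slots, so NumPy raises:
# ValueError: could not broadcast input array from shape (513,) into shape (512,)
input_block_tables[0, :len(block_table)] = block_table
```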

### 🐛 Describe the bug

The error: "ValueError: could not broadcast input array from shape (513,) into shape (512,)"
Whenever I pass --num-scheduler-steps (with any value), I hit the error above partway through processing. When I remove the flag, the error no longer occurs.
I have tried changing the context length, max tokens, and batch size, and reinstalling vllm; nothing helps.
Thank you for the help!
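For reference, this is roughly the kind of setup that triggers it for me. A minimal sketch using the offline API; the model name, lengths, and step count are placeholders rather than my exact values, and `num_scheduler_steps` is the engine-argument counterpart of the `--num-scheduler-steps` server flag:

```python
from vllm import LLM, SamplingParams

# Placeholders: model and sizes are illustrative, not the exact setup from this report.
llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    max_model_len=8192,
    num_scheduler_steps=8,  # removing this argument makes the crash go away
)

# Long generations let sequences grow through many block-table boundaries,
# which is when the broadcast error tends to show up.
params = SamplingParams(max_tokens=4096, temperature=0.8)
outputs = llm.generate(["Write a very long story about a lighthouse."] * 32, params)
```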

### Before submitting a new issue...

- [X] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.

SolitaryThinker commented 1 month ago

Fixed in https://github.com/vllm-project/vllm/pull/8340

ashgold commented 1 month ago

This is the same issue as #8068. Please refer to that issue for more information.

I believe this issue was fixed in v0.6.1 (#8340). Which version did you use? It would help if you could specify the version when reporting a bug.
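
If it helps, one quick way to check the installed version (assuming a build that exposes `vllm.__version__`):

```python
import vllm

# Prints the installed vLLM version string, e.g. "0.6.1".
print(vllm.__version__)
```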

JieChen91 commented 2 weeks ago

Hi! I'm encountering the same error with the latest vLLM, v0.6.4.

tail-recursion commented 6 days ago

I am also getting the same error with the latest vllm.

zifeitong commented 6 days ago

Can you share the models you're using when you see the error?

Is there a reliable way to reproduce it?

tail-recursion commented 6 days ago

Setting max_seq_len_to_capture=max_model_len fixed it for me.
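
For anyone else landing here, this is roughly what that workaround looks like with the offline API; the model name and context length are placeholders, not a confirmed repro case:

```python
from vllm import LLM

max_model_len = 32768  # placeholder context length

llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",
    max_model_len=max_model_len,
    # Workaround from this thread: let CUDA-graph capture cover the full
    # context length rather than the smaller default.
    max_seq_len_to_capture=max_model_len,
)
```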