QwenLM / Qwen2-VL

Qwen2-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.

vLLM serving of Qwen2-VL-7B-Instruct errors on image inference when --enable-prefix-caching is set #128

Open xiaoyuzju opened 2 months ago

xiaoyuzju commented 2 months ago

When serving Qwen2-VL-7B-Instruct with vLLM, inference on image data fails with a shape mismatch error when prefix caching is enabled. Prefix caching with text-only data does not error, and image data with prefix caching disabled does not error either.

Error:

  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_vl.py", line 764, in _merge_multimodal_embeddings
    inputs_embeds[mask, :] = multimodal_embeddings
RuntimeError: shape mismatch: value tensor of shape [1196, 3584] cannot be broadcast to indexing result of shape [0, 3584]
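The failing line scatters the vision encoder's output into the rows of inputs_embeds selected by a boolean mask over the image placeholder positions, and the "[0, 3584]" in the message means that mask selected zero rows. Below is a minimal standalone PyTorch sketch of that mechanism, with shapes taken from the error message; it is illustrative only, not vLLM's actual code path:

```python
import torch

hidden_size = 3584       # Qwen2-VL-7B hidden size, from the error message
num_image_tokens = 1196  # number of vision embeddings produced for the image

multimodal_embeddings = torch.randn(num_image_tokens, hidden_size)
inputs_embeds = torch.zeros(num_image_tokens + 20, hidden_size)

# Normal case: the mask marks exactly num_image_tokens placeholder positions,
# so the scatter assignment succeeds.
mask = torch.zeros(inputs_embeds.shape[0], dtype=torch.bool)
mask[:num_image_tokens] = True
inputs_embeds[mask, :] = multimodal_embeddings  # OK

# Suspected failure case: if the placeholder positions fall inside the cached
# prefix (so the forward pass only covers the uncached suffix), the mask
# selects zero rows while the image embeddings are still passed in.
empty_mask = torch.zeros(inputs_embeds.shape[0], dtype=torch.bool)
try:
    inputs_embeds[empty_mask, :] = multimodal_embeddings
except RuntimeError as e:
    # shape mismatch: value tensor of shape [1196, 3584] cannot be
    # broadcast to indexing result of shape [0, 3584]
    print(e)
```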

Server command:

CUDA_VISIBLE_DEVICES=0 python3 -m vllm.entrypoints.openai.api_server --trust-remote-code --max-model-len 2048 --gpu-memory-utilization 0.9 --enable-prefix-caching --model Qwen/Qwen2-VL-7B-Instruct

Environment:

Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35

Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-177-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090

Nvidia driver version: 535.146.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.5
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 7
CPU max MHz: 3900.0000
CPU min MHz: 1000.0000
BogoMIPS: 4600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 32 MiB (32 instances)
L3 cache: 44 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.68
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.45.0.dev0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-ml-py 12.560.30 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.68 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pyzmq 26.2.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] transformers 4.45.0.dev0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.5@2e87db7e708724110a84586dc916461ee9db09f7
vLLM Build Flags:
CUDA Archs: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
      GPU0  GPU1  GPU2  GPU3  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0   X    SYS   SYS   SYS   0,2,4,6,8,10  0             N/A
GPU1  SYS    X    SYS   SYS   0,2,4,6,8,10  0             N/A
GPU2  SYS   SYS    X    SYS   1,3,5,7,9,11  1             N/A
GPU3  SYS   SYS   SYS    X    1,3,5,7,9,11  1             N/A

Legend:

X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX  = Connection traversing at most a single PCIe bridge
NV#  = Connection traversing a bonded set of # NVLinks

Package Version


accelerate 0.34.0
aiohappyeyeballs 2.4.0
aiohttp 3.10.5
aiosignal 1.3.1
annotated-types 0.7.0
anyio 4.4.0
async-timeout 4.0.3
attrs 24.2.0
certifi 2024.8.30
charset-normalizer 3.3.2
click 8.1.7
cloudpickle 3.0.0
cmake 3.30.2
datasets 2.21.0
dill 0.3.8
diskcache 5.6.3
distro 1.9.0
einops 0.8.0
exceptiongroup 1.2.2
fastapi 0.112.2
filelock 3.15.4
flash-attn 2.6.3
frozenlist 1.4.1
fsspec 2024.6.1
gguf 0.9.1
h11 0.14.0
httpcore 1.0.5
httptools 0.6.1
httpx 0.27.2
huggingface-hub 0.24.6
idna 3.8
importlib_metadata 8.4.0
interegular 0.3.3
Jinja2 3.1.4
jiter 0.5.0
jsonschema 4.23.0
jsonschema-specifications 2023.12.1
lark 1.2.2
llvmlite 0.43.0
lm-format-enforcer 0.10.6
MarkupSafe 2.1.5
mistral_common 1.3.4
mpmath 1.3.0
msgpack 1.0.8
msgspec 0.18.6
multidict 6.0.5
multiprocess 0.70.16
nest-asyncio 1.6.0
networkx 3.3
ninja 1.11.1.1
numba 0.60.0
numpy 1.26.4
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-ml-py 12.560.30
nvidia-nccl-cu12 2.20.5
nvidia-nvjitlink-cu12 12.6.68
nvidia-nvtx-cu12 12.1.105
openai 1.43.0
outlines 0.0.46
packaging 24.1
pandas 2.2.2
pillow 10.4.0
pip 24.2
prometheus_client 0.20.0
prometheus-fastapi-instrumentator 7.0.0
protobuf 5.28.0
psutil 6.0.0
py-cpuinfo 9.0.0
pyairports 2.1.1
pyarrow 17.0.0
pycountry 24.6.1
pydantic 2.8.2
pydantic_core 2.20.1
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
pytz 2024.1
PyYAML 6.0.2
pyzmq 26.2.0
qwen-vl-utils 0.0.4
ray 2.35.0
referencing 0.35.1
regex 2024.7.24
requests 2.32.3
rpds-py 0.20.0
safetensors 0.4.4
sentencepiece 0.2.0
setuptools 72.1.0
six 1.16.0
sniffio 1.3.1
starlette 0.38.4
sympy 1.13.2
tiktoken 0.7.0
tokenizers 0.19.1
torch 2.4.0
torchvision 0.19.0
tqdm 4.66.5
transformers 4.45.0.dev0
triton 3.0.0
typing_extensions 4.12.2
tzdata 2024.1
urllib3 2.2.2
uvicorn 0.30.6
uvloop 0.20.0
vllm 0.5.5+cu122
vllm-flash-attn 2.6.1
watchfiles 0.24.0
websockets 13.0.1
wheel 0.43.0
xformers 0.0.27.post2
xxhash 3.5.0
yarl 1.9.7
zipp 3.20.1

fyabc commented 2 months ago

@xiaoyuzju Hello, judging from the error message you provided, the vLLM you have installed does not appear to be the correct latest version. Please pull the latest vLLM from here, reinstall it, and try again.

xiaoyuzju commented 2 months ago

> @xiaoyuzju Hello, judging from the error message you provided, the vLLM you have installed does not appear to be the correct latest version. Please pull the latest vLLM from here, reinstall it, and try again.

Hello, thank you very much for your reply. Testing with the pulled code (commit d52741754, version 0.6.0), the following error still occurs when the server is started with --enable-prefix-caching and image data is used:

  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_vl.py", line 816, in _merge_multimodal_embeddings                                                                    
    inputs_embeds[mask, :] = multimodal_embeddings                                                                                                                                                                                              
RuntimeError: shape mismatch: value tensor of shape [1196, 3584] cannot be broadcast to indexing result of shape [0, 3584]
fyabc commented 2 months ago

> > @xiaoyuzju Hello, judging from the error message you provided, the vLLM you have installed does not appear to be the correct latest version. Please pull the latest vLLM from here, reinstall it, and try again.
>
> Hello, thank you very much for your reply. Testing with the pulled code (commit d52741754, version 0.6.0), the following error still occurs when the server is started with --enable-prefix-caching and image data is used:
>
>   File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_vl.py", line 816, in _merge_multimodal_embeddings
>     inputs_embeds[mask, :] = multimodal_embeddings
> RuntimeError: shape mismatch: value tensor of shape [1196, 3584] cannot be broadcast to indexing result of shape [0, 3584]

@xiaoyuzju Hello, could you provide the complete OpenAI request and the full error message?

xiaoyuzju commented 2 months ago

> > > @xiaoyuzju Hello, judging from the error message you provided, the vLLM you have installed does not appear to be the correct latest version. Please pull the latest vLLM from here, reinstall it, and try again.
> >
> > Hello, thank you very much for your reply. Testing with the pulled code (commit d52741754, version 0.6.0), the following error still occurs when the server is started with --enable-prefix-caching and image data is used:
> >
> >   File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_vl.py", line 816, in _merge_multimodal_embeddings
> >     inputs_embeds[mask, :] = multimodal_embeddings
> > RuntimeError: shape mismatch: value tensor of shape [1196, 3584] cannot be broadcast to indexing result of shape [0, 3584]
>
> @xiaoyuzju Hello, could you provide the complete OpenAI request and the full error message?

Hello, the request I used and the server-side error are as follows:

Request:

```python
import base64

from openai import OpenAI

def test():
    client = OpenAI(api_key="123", base_url="http://0.0.0.0:18000/v1/")

    # demo.jpeg is downloaded from https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg
    with open("demo.jpeg", "rb") as f:
        base64_image = base64.b64encode(f.read()).decode("utf-8")

    for i in range(10):
        response = client.chat.completions.create(
            model="Qwen/Qwen2-VL-7B-Instruct",
            messages=[
                {
                    "role": "user",
                    "content": [
                        {"type": "text", "text": "describe the image"},
                        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}},
                    ],
                },
            ],
            extra_body={},
            temperature=0.8,
        )
        print(response.choices[0].message.content)

def main():
    test()

if __name__ == "__main__":
    main()
```
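(The loop sends the identical request ten times: the first request presumably populates the prefix cache, and a repeated request that then hits the cached prefix triggers the error below.)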

Error:

Future exception was never retrieved
future: <Future finished exception=RuntimeError('shape mismatch: value tensor of shape [3577, 3584] cannot be broadcast to indexing result of shape [0, 3584]')>
Traceback (most recent call last):
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/entrypoints/openai/rpc/server.py", line 115, in generate
    async for request_output in results_generator:
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 1073, in generate
    async for output in await self.add_request(
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 111, in generator
    raise result
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 53, in _log_task_completion
    return_value = task.result()
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 939, in run_engine_loop
    result = task.result()
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 868, in engine_step
    request_outputs = await self.engine.step_async(virtual_engine)
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 345, in step_async
    outputs = await self.model_executor.execute_model_async(
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/executor/gpu_executor.py", line 185, in execute_model_async
    output = await make_async(self.driver_worker.execute_model
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 327, in execute_model
    output = self.model_runner.execute_model(
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1538, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_vl.py", line 864, in forward
    inputs_embeds = self._merge_multimodal_embeddings(
  File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-packages/vllm/model_executor/models/qwen2_vl.py", line 816, in _merge_multimodal_embeddings
    inputs_embeds[mask, :] = multimodal_embeddings
RuntimeError: shape mismatch: value tensor of shape [3577, 3584] cannot be broadcast to indexing result of shape [0, 3584]
fyabc commented 2 months ago

@xiaoyuzju Sorry, my earlier reply was misleading. At the moment, none of vLLM's multimodal models support prefix caching; this is not specific to Qwen2-VL. Please refer to here.
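In other words, until vLLM supports prefix caching for multimodal models, the workaround is to serve without it, i.e. the original server command from this report minus --enable-prefix-caching:

```
CUDA_VISIBLE_DEVICES=0 python3 -m vllm.entrypoints.openai.api_server --trust-remote-code --max-model-len 2048 --gpu-memory-utilization 0.9 --model Qwen/Qwen2-VL-7B-Instruct
```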