vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Phi-3-V with vllm serve Pickling function error #8288

Closed BabyChouSr closed 1 week ago

BabyChouSr commented 1 week ago

Your current environment

Collecting environment information... PyTorch version: 2.4.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.30.0 Libc version: glibc-2.35

Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-6.5.0-1024-azure-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe GPU 1: NVIDIA A100 80GB PCIe GPU 2: NVIDIA A100 80GB PCIe GPU 3: NVIDIA A100 80GB PCIe

Nvidia driver version: 535.183.06 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True

CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 48 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 96 On-line CPU(s) list: 0-95 Vendor ID: AuthenticAMD Model name: AMD EPYC 7V13 64-Core Processor CPU family: 25 Model: 1 Thread(s) per core: 1 Core(s) per socket: 48 Socket(s): 2 Stepping: 1 BogoMIPS: 4890.88 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr rdpru arat umip vaes vpclmulqdq rdpid fsrm Hypervisor vendor: Microsoft Virtualization type: full L1d cache: 3 MiB (96 instances) L1i cache: 3 MiB (96 instances) L2 cache: 48 MiB (96 instances) L3 cache: 384 MiB (12 instances) NUMA node(s): 4 NUMA node0 CPU(s): 0-23 NUMA node1 CPU(s): 24-47 NUMA node2 CPU(s): 48-71 NUMA node3 CPU(s): 72-95 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode Vulnerability Spec store bypass: Vulnerable Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected

Versions of relevant libraries: [pip3] flashinfer==0.1.3+cu121torch2.3 [pip3] numpy==1.26.4 [pip3] nvidia-cublas-cu12==12.1.3.1 [pip3] nvidia-cuda-cupti-cu12==12.1.105 [pip3] nvidia-cuda-nvrtc-cu12==12.1.105 [pip3] nvidia-cuda-runtime-cu12==12.1.105 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.0.2.54 [pip3] nvidia-curand-cu12==10.3.2.106 [pip3] nvidia-cusolver-cu12==11.4.5.107 [pip3] nvidia-cusparse-cu12==12.1.0.106 [pip3] nvidia-ml-py==12.560.30 [pip3] nvidia-nccl-cu12==2.20.5 [pip3] nvidia-nvjitlink-cu12==12.6.20 [pip3] nvidia-nvtx-cu12==12.1.105 [pip3] pyzmq==26.1.1 [pip3] torch==2.4.0 [pip3] torchvision==0.19.0 [pip3] transformers==4.44.1 [pip3] triton==3.0.0 [conda] flashinfer 0.1.3+cu121torch2.3 pypi_0 pypi [conda] numpy 1.26.4 pypi_0 pypi [conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi [conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi [conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi [conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi [conda] nvidia-ml-py 12.560.30 pypi_0 pypi [conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi [conda] nvidia-nvjitlink-cu12 12.6.20 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi [conda] pyzmq 26.1.1 pypi_0 pypi [conda] torch 2.4.0 pypi_0 pypi [conda] torchvision 0.19.0 pypi_0 pypi [conda] transformers 4.44.1 pypi_0 pypi [conda] triton 3.0.0 pypi_0 pypi ROCM Version: Could not collect Neuron SDK Version: N/A

vLLM Version: 0.6.0@4ef41b84766670c1bd8079f58d35bf32b5bcb3ab
vLLM Build Flags: CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
      GPU0  GPU1  GPU2  GPU3  NIC0  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0   X    NV12  SYS   SYS   NODE  0-23          0              N/A
GPU1  NV12   X    SYS   SYS   SYS   24-47         1              N/A
GPU2  SYS   SYS    X    NV12  SYS   48-71         2              N/A
GPU3  SYS   SYS   NV12   X    SYS   72-95         3              N/A
NIC0  NODE  SYS   SYS   SYS    X

Legend:

X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX  = Connection traversing at most a single PCIe bridge
NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0

🐛 Describe the bug

Command:

CUDA_VISIBLE_DEVICES=1 vllm serve microsoft/Phi-3-vision-128k-instruct --port 21003 --max-model-len 4096 --trust-remote-code --gpu-memory-utilization 0.20

Error traceback:

INFO 09-09 07:43:51 gpu_executor.py:122] # GPU blocks: 1279, # CPU blocks: 682
INFO 09-09 07:43:54 model_runner.py:1217] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 09-09 07:43:54 model_runner.py:1221] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
INFO 09-09 07:44:03 model_runner.py:1335] Graph capturing finished in 10 secs.
INFO 09-09 07:44:03 server.py:228] vLLM ZMQ RPC Server was interrupted.
Traceback (most recent call last):
  File "/home/lmsys/miniconda3/envs/vllm-source/bin/vllm", line 8, in <module>
    sys.exit(main())
  File "/home/lmsys/vllm/vllm/scripts.py", line 165, in main
    args.dispatch_function(args)
  File "/home/lmsys/vllm/vllm/scripts.py", line 37, in serve
    asyncio.run(run_server(args))
  File "/home/lmsys/miniconda3/envs/vllm-source/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/home/lmsys/miniconda3/envs/vllm-source/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/lmsys/vllm/vllm/entrypoints/openai/api_server.py", line 498, in run_server
    async with build_async_engine_client(args) as async_engine_client:
  File "/home/lmsys/miniconda3/envs/vllm-source/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/home/lmsys/vllm/vllm/entrypoints/openai/api_server.py", line 110, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/home/lmsys/miniconda3/envs/vllm-source/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/home/lmsys/vllm/vllm/entrypoints/openai/api_server.py", line 184, in build_async_engine_client_from_engine_args
    await rpc_client.setup()
  File "/home/lmsys/vllm/vllm/entrypoints/openai/rpc/client.py", line 158, in setup
    self.model_config = await self._get_model_config_rpc()
  File "/home/lmsys/vllm/vllm/entrypoints/openai/rpc/client.py", line 292, in _get_model_config_rpc
    return await self._send_get_data_rpc_request(
  File "/home/lmsys/vllm/vllm/entrypoints/openai/rpc/client.py", line 223, in _send_get_data_rpc_request
    raise data
_pickle.PicklingError: Can't pickle <class 'transformers_modules.microsoft.Phi-3-vision-128k-instruct.c45209e90a4c4f7d16b2e9d48503c7f3e83623ed.configuration_phi3_v.Phi3VConfig'>: it's not the same object as transformers_modules.microsoft.Phi-3-vision-128k-instruct.c45209e90a4c4f7d16b2e9d48503c7f3e83623ed.configuration_phi3_v.Phi3VConfig
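
For context, pickle serializes a class by reference (module name plus qualified name) and raises exactly this "it's not the same object as ..." error when that name no longer resolves to the identical class object, which is what appears to happen when the trust-remote-code Phi3VConfig class is re-created under transformers_modules. A minimal, vLLM-independent sketch of the mechanism (DemoConfig is a hypothetical stand-in, not the actual code path):

import pickle

# Hypothetical stand-in for the dynamically created Phi3VConfig class.
class DemoConfig:
    pass

original_cls = DemoConfig

# Rebinding the module-level name to a fresh class object mimics the remote-code
# class being re-created; the old class object is now "orphaned".
class DemoConfig:
    pass

try:
    pickle.dumps(original_cls)
except pickle.PicklingError as exc:
    # Prints: Can't pickle <class '__main__.DemoConfig'>: it's not the same object as __main__.DemoConfig
    print(exc)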


youkaichao commented 1 week ago

Is it your first time running the model? I suggest you download the model first and then run vLLM.

youkaichao commented 1 week ago

See https://docs.vllm.ai/en/stable/getting_started/debugging.html#debugging-hang-crash-issues, specifically the "Downloading a model" part.
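
For example, a minimal sketch of pre-downloading the model into the local Hugging Face cache (assuming huggingface_hub is installed; this is one way to do it, not a quote from the linked docs):

from huggingface_hub import snapshot_download

# Fetch the weights and the remote-code files (e.g. configuration_phi3_v.py)
# into the local HF cache so that a later `vllm serve` only reads from disk.
local_dir = snapshot_download("microsoft/Phi-3-vision-128k-instruct")
print(local_dir)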

BabyChouSr commented 1 week ago

No, this worked on vLLM 0.5.4 in the past.

EDIT: Some context: I can also serve phi3.5-vision and internvl2-4b, which have a similar architecture.

youkaichao commented 1 week ago

I think it might be related to https://github.com/vllm-project/vllm/pull/6751, but I haven't had the bandwidth to investigate :(

also cc @robertgshaw2-neuralmagic

robertgshaw2-neuralmagic commented 1 week ago

You can run with --disable-frontend-multiprocessing to avoid the issue for now. Will work on a permanent fix.
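
For example, adapting the command from this report (same settings as above, with only the workaround flag added):

CUDA_VISIBLE_DEVICES=1 vllm serve microsoft/Phi-3-vision-128k-instruct \
    --port 21003 --max-model-len 4096 --trust-remote-code \
    --gpu-memory-utilization 0.20 --disable-frontend-multiprocessing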

BabyChouSr commented 1 week ago

@robertgshaw2-neuralmagic @youkaichao Thanks! This worked for me :)