vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Loading mistralai/Mixtral-8x22B-Instruct-v0.1 raises TypeError: a bytes-like object is required, not 'str' #9821

Closed: brnrc closed this issue 2 weeks ago

brnrc commented 2 weeks ago

Your current environment

The output of `python collect_env.py`:

```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.31

Python version: 3.10.15 | packaged by conda-forge | (main, Sep 20 2024, 16:37:05) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.10.0-32-cloud-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB

Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.198
BogoMIPS: 4400.39
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 77 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==11.495.46
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.46.0
[pip3] triton==3.0.0
[conda] numpy 1.25.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-ml-py 11.495.46 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pyzmq 26.2.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] transformers 4.46.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi

ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
      GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0  X     NV12  NV12  NV12  NV12  NV12  NV12  NV12  0-23,48-71    0              N/A
GPU1  NV12  X     NV12  NV12  NV12  NV12  NV12  NV12  0-23,48-71    0              N/A
GPU2  NV12  NV12  X     NV12  NV12  NV12  NV12  NV12  0-23,48-71    0              N/A
GPU3  NV12  NV12  NV12  X     NV12  NV12  NV12  NV12  0-23,48-71    0              N/A
GPU4  NV12  NV12  NV12  NV12  X     NV12  NV12  NV12  24-47,72-95   1              N/A
GPU5  NV12  NV12  NV12  NV12  NV12  X     NV12  NV12  24-47,72-95   1              N/A
GPU6  NV12  NV12  NV12  NV12  NV12  NV12  X     NV12  24-47,72-95   1              N/A
GPU7  NV12  NV12  NV12  NV12  NV12  NV12  NV12  X     24-47,72-95   1              N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
```

🐛 Describe the bug

When trying to serve mistralai/Mixtral-8x22B-Instruct-v0.1 on a single node with 8 GPUs, I get the error below:

VLLM_LOGGING_LEVEL=DEBUG python -m vllm.entrypoints.openai.api_server  \
  --model mistralai/Mixtral-8x22B-Instruct-v0.1   \
  --tensor-parallel-size 8   \
  --tokenizer-mode="mistral"

...

INFO 10-29 22:43:58 multiproc_worker_utils.py:120] Killing local vLLM worker processes
DEBUG 10-29 22:43:58 client.py:170] Waiting for output from MQLLMEngine.
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/vllm/worker/model_runner_base.py", line 116, in _wrapper
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 1658, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/mixtral.py", line 380, in forward
    hidden_states = self.model(input_ids, positions, kv_caches,
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/mixtral.py", line 300, in forward
    hidden_states, residual = layer(positions, hidden_states,
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/mixtral.py", line 244, in forward
    hidden_states = self.block_sparse_moe(hidden_states)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/mixtral.py", line 101, in forward
    final_hidden_states = self.experts(hidden_states, router_logits)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/layers/fused_moe/layer.py", line 474, in forward
    final_hidden_states = self.quant_method.apply(
  File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/layers/quantization/fp8.py", line 495, in apply
    return fused_experts(x,
  File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 510, in fused_experts
    config = get_config_func(M)
  File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 369, in try_get_optimal_moe_config
    configs = get_moe_configs(E, N, dtype)
  File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 310, in get_moe_configs
    json_file_name = get_config_file_name(E, N, dtype)
  File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 291, in get_config_file_name
    device_name = current_platform.get_device_name().replace(" ", "_")
TypeError: a bytes-like object is required, not 'str'

You can reproduce the error with the CLI command above (and a similar environment) or with the snippet below:

from vllm.platforms import current_platform
print(current_platform)
current_platform.get_device_name().replace(" ", "_")
$ python
Python 3.10.15 | packaged by conda-forge | (main, Sep 20 2024, 16:37:05) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from vllm.platforms import current_platform
>>> print(current_platform)
<vllm.platforms.cuda.CudaPlatform object at 0x7fa6bca82b30>
>>> name = current_platform.get_device_name()
>>> name
b'NVIDIA A100-SXM4-80GB'
>>> name.replace(" ", "_")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: a bytes-like object is required, not 'str'
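
For reference, the failure itself is just a bytes/str mismatch: `bytes.replace` only accepts bytes-like arguments in Python 3, so calling it with str arguments raises exactly this error. A minimal illustration:

```python
# bytes.replace() only accepts bytes-like arguments in Python 3
name = b"NVIDIA A100-SXM4-80GB"
print(name.replace(b" ", b"_"))  # b'NVIDIA_A100-SXM4-80GB'
name.replace(" ", "_")           # TypeError: a bytes-like object is required, not 'str'
```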
brnrc commented 2 weeks ago

I managed to fix this by explicitly casting the device name to str, since current_platform.get_device_name() returns bytes instead of str. After applying the patch, I can load the model without problems.

diff --git a/vllm/model_executor/layers/fused_moe/fused_moe.py b/vllm/model_executor/layers/fused_moe/fused_moe.py
index 1cf5c225..b43bb39a 100644
--- a/vllm/model_executor/layers/fused_moe/fused_moe.py
+++ b/vllm/model_executor/layers/fused_moe/fused_moe.py
@@ -288,7 +288,7 @@ def invoke_fused_moe_kernel(A: torch.Tensor, B: torch.Tensor, C: torch.Tensor,

 def get_config_file_name(E: int, N: int, dtype: Optional[str]) -> str:
-    device_name = current_platform.get_device_name().replace(" ", "_")
+    device_name = str(current_platform.get_device_name()).replace(" ", "_")
     dtype_selector = "" if not dtype else f",dtype={dtype}"
     return f"E={E},N={N},device_name={device_name}{dtype_selector}.json"
jeejeelee commented 2 weeks ago

It looks like your nvidia-ml-py is old; you can try updating it:

pip install nvidia-ml-py==12.560.30
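
A quick way to confirm the upgrade changed the return type, using the `pynvml` module that nvidia-ml-py provides (a sketch; device index 0 assumes at least one visible GPU):

```python
# After upgrading nvidia-ml-py, the device name should come back as str.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
name = pynvml.nvmlDeviceGetName(handle)
print(type(name), name)  # expect: <class 'str'> NVIDIA A100-SXM4-80GB
pynvml.nvmlShutdown()
```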
brnrc commented 2 weeks ago

> It looks like your nvidia-ml-py is old; you can try updating it:
>
> pip install nvidia-ml-py==12.560.30

@jeejeelee Thank you for the quick reply. What do you think of setting a minimum version for nvidia-ml-py? Seems like the version pinning was dropped after moving away from pynvml (https://github.com/vllm-project/vllm/commit/e4bf860a54a302ccb2d80489368d5df686e46923).

- nvidia-ml-py # for pynvml package
+ nvidia-ml-py >= 12.560.30 # for pynvml package 
jeejeelee commented 2 weeks ago

IMHO, we should add a minimum version, cc @youkaichao

youkaichao commented 2 weeks ago

sounds good, we can specify nvidia-ml-py >= 12.560.30