vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: MoE model, two-GPU inference fails with AssertionError("Invalid device id") #5527

Open · Elissa0723 opened this issue 2 months ago

Elissa0723 commented 2 months ago

Your current environment

PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.5
Libc version: glibc-2.35

Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB

Nvidia driver version: 470.199.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

🐛 Describe the bug

When running two-GPU inference with Qwen2-57B-A14B-Instruct, I ran the following code:

import os
# Note: these helper imports were omitted in the original snippet; they are
# assumed to come from ModelScope Swift (ms-swift), which provides them.
from swift.llm import ModelType, get_default_template_type, get_template, get_vllm_engine

os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
model_dir = '/publicdata/huggingface.co/Qwen/Qwen2-57B-A14B-Instruct'
model_type = ModelType.qwen2_57b_a14b_instruct

template_type = get_default_template_type(model_type)
llm_engine = get_vllm_engine(model_type, model_id_or_path=model_dir, tensor_parallel_size=2)
tokenizer = llm_engine.hf_tokenizer
template = get_template(template_type, tokenizer)
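For reference, a rough equivalent using vLLM's own Python API directly (a sketch under the assumption that the Swift helper ultimately builds a vLLM engine with tensor_parallel_size=2; the prompt and sampling settings below are placeholders):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'  # set before any CUDA initialization

from vllm import LLM, SamplingParams

# With tensor_parallel_size > 1, vLLM 0.4.x launches one Ray worker per GPU,
# which is where the RayWorkerVllm errors below originate.
llm = LLM(
    model='/publicdata/huggingface.co/Qwen/Qwen2-57B-A14B-Instruct',
    tensor_parallel_size=2,
    trust_remote_code=True,
)
outputs = llm.generate(['Hello'], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)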

The error output is as follows:

(RayWorkerVllm pid=2689517) INFO 06-14 11:52:26 selector.py:16] Using FlashAttention backend.
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44] Error executing method init_device. This might cause deadlock in distributed execution.
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44] Traceback (most recent call last):
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44]   File "/opt/conda/lib/python3.10/site-packages/vllm/engine/ray_utils.py", line 37, in execute_method
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44]     return executor(*args, **kwargs)
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44]   File "/opt/conda/lib/python3.10/site-packages/vllm/worker/worker.py", line 93, in init_device
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44]     _check_if_gpu_supports_dtype(self.model_config.dtype)
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44]   File "/opt/conda/lib/python3.10/site-packages/vllm/worker/worker.py", line 309, in _check_if_gpu_supports_dtype
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44]     compute_capability = torch.cuda.get_device_capability()
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44]   File "/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py", line 435, in get_device_capability
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44]     prop = get_device_properties(device)
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44]   File "/opt/conda/lib/python3.10/site-packages/torch/cuda/__init__.py", line 452, in get_device_properties
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44]     raise AssertionError("Invalid device id")
(RayWorkerVllm pid=2689517) ERROR 06-14 11:52:26 ray_utils.py:44] AssertionError: Invalid device id
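For what it's worth, the assertion itself comes from torch.cuda.get_device_properties() being handed a device index that is out of range for the GPUs visible to the calling process. A minimal sketch of that mechanism (illustrative only; the exact trigger inside vLLM's Ray workers may differ):

import os

# CUDA_VISIBLE_DEVICES only takes effect if set before CUDA is initialized
# in this process; each worker then sees a renumbered subset of the GPUs.
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'

import torch

print(torch.cuda.device_count())            # 2, not 4: only two GPUs are visible
print(torch.cuda.get_device_capability(0))  # (8, 0) on an A800: a valid index

# Index 2 does not exist in this process, so PyTorch raises the same
# AssertionError("Invalid device id") seen in the traceback above:
# torch.cuda.get_device_capability(2)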

youkaichao commented 2 months ago

Can you try https://github.com/vllm-project/vllm/pull/5473? I think it should fix your error.

Elissa0723 commented 2 months ago

> Can you try #5473? I think it should fix your error.

I pulled the latest vLLM code and tried to install it, but I ran into some problems during installation.

If I use pip install -e ., it gets stuck here:

Looking in indexes: https://mirrors.aliyun.com/pypi/simple
Obtaining file:///workspace/huj11%40xiaopeng.com/code/vllm
Installing build dependencies ... |

If I use pip install --editable ./ --no-build-isolation, it gets stuck here:

Looking in indexes: https://mirrors.aliyun.com/pypi/simple
Obtaining file:///workspace/huj11%40xiaopeng.com/code/vllm
Checking if build backend supports build_editable ... done
Preparing editable metadata (pyproject.toml) ... done
Requirement already satisfied: cmake>=3.21 in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (3.29.5)
Requirement already satisfied: ninja in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (1.11.1.1)
Requirement already satisfied: psutil in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (5.9.7)
Requirement already satisfied: sentencepiece in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (0.1.99)
Requirement already satisfied: numpy in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (1.26.3)
Requirement already satisfied: requests in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (2.31.0)
Requirement already satisfied: py-cpuinfo in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (9.0.0)
Requirement already satisfied: transformers>=4.40.0 in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (4.41.2)
Requirement already satisfied: tokenizers>=0.19.1 in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (0.19.1)
Requirement already satisfied: fastapi in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (0.109.0)
Requirement already satisfied: openai in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (1.33.0)
Requirement already satisfied: uvicorn[standard] in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (0.27.0.post1)
Requirement already satisfied: pydantic>=2.0 in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (2.5.3)
Requirement already satisfied: prometheus-client>=0.18.0 in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (0.19.0)
Requirement already satisfied: prometheus-fastapi-instrumentator>=7.0.0 in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (7.0.0)
Requirement already satisfied: tiktoken==0.6.0 in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (0.6.0)
Requirement already satisfied: lm-format-enforcer==0.10.1 in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (0.10.1)
Requirement already satisfied: outlines==0.0.34 in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (0.0.34)
Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (4.9.0)
Requirement already satisfied: filelock>=3.10.4 in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (3.13.1)
Requirement already satisfied: ray>=2.9 in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (2.9.1)
Requirement already satisfied: nvidia-ml-py in /opt/conda/lib/python3.10/site-packages (from vllm==0.4.2) (12.555.43)
Collecting vllm-nccl-cu12<2.19,>=2.18 (from vllm==0.4.2)
Downloading https://mirrors.aliyun.com/pypi/packages/41/07/c1be8f4ffdc257646dda26470b803487150c732aa5c9f532dd789f186a54/vllm_nccl_cu12-2.18.1.0.4.0.tar.gz (6.2 kB)