vllm-project / vllm

[Bug]: Qwen2MoE Config Issue #3807

Open · robertgshaw2-neuralmagic opened this issue 6 months ago

robertgshaw2-neuralmagic commented 6 months ago

Your current environment

(vllm-env) root@engine-qa-cc76fb696-5nznf:~# python collect_env.py
Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.0
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000

Nvidia driver version: 545.23.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      40 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             30
On-line CPU(s) list:                0-29
Vendor ID:                          AuthenticAMD
Model name:                         AMD EPYC-Milan Processor
CPU family:                         25
Model:                              1
Thread(s) per core:                 1
Core(s) per socket:                 30
Socket(s):                          1
Stepping:                           1
BogoMIPS:                           5199.99
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt nrip_save umip pku ospke vaes vpclmulqdq rdpid arch_capabilities
Virtualization:                     AMD-V
Hypervisor vendor:                  KVM
Virtualization type:                full
L1d cache:                          960 KiB (30 instances)
L1i cache:                          960 KiB (30 instances)
L2 cache:                           15 MiB (30 instances)
L3 cache:                           32 MiB (1 instance)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-29
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.1.2
[pip3] triton==2.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.0.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV4     0-29    0               N/A
GPU1    NV4      X      0-29    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

Launch:

export MODEL_ID=Qwen/Qwen1.5-MoE-A2.7B
python3 -m vllm.entrypoints.openai.api_server --model $MODEL_ID --max-model-len 4096 --disable-log-requests --tensor-parallel-size 1

Result:

(vllm-env) root@engine-qa-cc76fb696-5nznf:~# python3 -m vllm.entrypoints.openai.api_server --model $MODEL_ID --max-model-len 4096 --disable-log-requests --tensor-parallel-size 1
INFO 04-02 23:43:29 api_server.py:149] vLLM API server version 0.4.0.post1
INFO 04-02 23:43:29 api_server.py:150] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, served_model_name=None, lora_modules=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='Qwen/Qwen1.5-MoE-A2.7B', tokenizer=None, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', max_model_len=4096, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, forced_num_gpu_blocks=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=5, disable_log_stats=False, quantization=None, enforce_eager=False, max_context_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', max_cpu_loras=None, device='auto', image_input_type=None, image_token_id=None, image_input_shape=None, image_feature_size=None, scheduler_delay_factor=0.0, enable_chunked_prefill=False, engine_use_ray=False, disable_log_requests=True, max_log_len=None)
Traceback (most recent call last):
  File "/root/vllm-env/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1155, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/root/vllm-env/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 852, in __getitem__
    raise KeyError(key)
KeyError: 'qwen2_moe'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/root/vllm-env/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 157, in <module>
    engine = AsyncLLMEngine.from_engine_args(
  File "/root/vllm-env/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 331, in from_engine_args
    engine_configs = engine_args.create_engine_configs()
  File "/root/vllm-env/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 390, in create_engine_configs
    model_config = ModelConfig(
  File "/root/vllm-env/lib/python3.10/site-packages/vllm/config.py", line 121, in __init__
    self.hf_config = get_config(self.model, trust_remote_code, revision,
  File "/root/vllm-env/lib/python3.10/site-packages/vllm/transformers_utils/config.py", line 37, in get_config
    raise e
  File "/root/vllm-env/lib/python3.10/site-packages/vllm/transformers_utils/config.py", line 22, in get_config
    config = AutoConfig.from_pretrained(
  File "/root/vllm-env/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1157, in from_pretrained
    raise ValueError(
ValueError: The checkpoint you are trying to load has model type `qwen2_moe` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
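
For reference, the failure reproduces without vLLM at all; it comes straight out of transformers' AutoConfig lookup. A minimal sketch (assumes transformers==4.39.3 and network access to the Hugging Face Hub):

# Minimal repro, independent of vLLM
from transformers import AutoConfig

# Raises the ValueError above, because `qwen2_moe` is missing from
# CONFIG_MAPPING in this transformers version.
AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")
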
robertgshaw2-neuralmagic commented 6 months ago

The reason for the issue is that the latest transformers release, 4.39.3, does not include Qwen2MoE support yet.

Once they push out the next release, this will be resolved.
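
A quick way to confirm this locally (a sketch; CONFIG_MAPPING here is the same registry the lookup in the traceback fails against):

import transformers
from transformers.models.auto.configuration_auto import CONFIG_MAPPING

print(transformers.__version__)       # 4.39.3 at the time of this issue
print("qwen2_moe" in CONFIG_MAPPING)  # False until Qwen2MoE support lands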

markusza commented 6 months ago

If someone stumbles on this and can't wait :) you can install transformers from source:

# Install transformers from source to pick up Qwen2MoE support
# ahead of the next PyPI release
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .
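
To verify the editable install took effect, something like this should work (a sketch; the exact dev version string will differ):

import transformers
from transformers import AutoConfig

print(transformers.__version__)  # should be a dev build, e.g. 4.40.0.dev0
# Should now resolve instead of raising ValueError:
config = AutoConfig.from_pretrained("Qwen/Qwen1.5-MoE-A2.7B")
print(type(config).__name__)     # expected: Qwen2MoeConfig
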
Jack-mi commented 6 months ago

> The reason for the issue is that the latest transformers release, 4.39.3, does not include Qwen2MoE support yet.
>
> Once they push out the next release, this will be resolved.

This is clearly at odds with what the Qwen team says.

robertgshaw2-neuralmagic commented 6 months ago

You can just look at huggingface/transformers to see:

Qwen2 does exist in 4.39.3, but Qwen2MoE does not.
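
A minimal check against the same lookup the traceback goes through (sketch, run on transformers 4.39.3):

from transformers.models.auto.configuration_auto import CONFIG_MAPPING

for model_type in ("qwen2", "qwen2_moe"):
    try:
        CONFIG_MAPPING[model_type]  # the same lookup that raised KeyError above
        print(f"{model_type}: registered")
    except KeyError:
        print(f"{model_type}: not registered")
# On 4.39.3: qwen2 is registered, qwen2_moe is not.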