Closed CrazyboyQCD closed 2 months ago

Your current environment
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti

Nvidia driver version: 555.99
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) W-2150B CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 4
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 320 KiB (10 instances)
L1i cache: 320 KiB (10 instances)
L2 cache: 10 MiB (10 instances)
L3 cache: 13.8 MiB (1 instance)
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown

Versions of relevant libraries:
[pip3] flashinfer==0.0.8+cu121torch2.3
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] torchvision==0.18.0
[pip3] transformers==4.42.3
[pip3] triton==2.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0    GPU1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV2                                     N/A
GPU1    NV2      X                                      N/A

Legend:
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks
🐛 Describe the bug

Command:
docker run --gpus all --net Qwen --shm-size 1g --name qwen-57b-vllm -v ~/.cache/huggingface:/root/.cache/huggingface -p 8080:8000 vllm/vllm-openai:latest --model Qwen/Qwen2-57B-A14B-Instruct-GPTQ-Int4 --host 0.0.0.0 --max-model-len 8192 --quantization gptq
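For reference, the same load path can presumably be reproduced without Docker. Below is a minimal sketch, assuming vLLM 0.5.1 is installed locally and reusing the model name and flags from the command above; it goes through the same engine and model-loading code as the OpenAI-compatible server, so it should hit the same failure.

from vllm import LLM

# Minimal repro sketch (assumes a local vLLM 0.5.1 install); this follows the
# same engine/model load path as the server started by the docker command
# above, so it should fail with the same FusedMoE assertion.
llm = LLM(
    model="Qwen/Qwen2-57B-A14B-Instruct-GPTQ-Int4",
    quantization="gptq",
    max_model_len=8192,
)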
Follow up output:
INFO 07-10 01:04:35 api_server.py:206] vLLM API server version 0.5.1
INFO 07-10 01:04:35 api_server.py:207] args: Namespace(host='0.0.0.0', port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='Qwen/Qwen2-57B-A14B-Instruct-GPTQ-Int4', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization='gptq', rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, device='auto', scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, model_loader_extra_config=None, preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
WARNING 07-10 01:04:36 config.py:244] gptq quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO 07-10 01:04:36 llm_engine.py:169] Initializing an LLM engine (v0.5.1) with config: model='Qwen/Qwen2-57B-A14B-Instruct-GPTQ-Int4', speculative_config=None, tokenizer='Qwen/Qwen2-57B-A14B-Instruct-GPTQ-Int4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=gptq, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=Qwen/Qwen2-57B-A14B-Instruct-GPTQ-Int4, use_v2_block_manager=False, enable_prefix_caching=False)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
WARNING 07-10 01:04:45 utils.py:562] Using 'pin_memory=False' as WSL is detected. This may slow down the performance.
INFO 07-10 01:04:45 selector.py:153] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO 07-10 01:04:45 selector.py:53] Using XFormers backend.
INFO 07-10 01:04:48 selector.py:153] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO 07-10 01:04:48 selector.py:53] Using XFormers backend.
[rank0]: Traceback (most recent call last):
[rank0]:   File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]:     return _run_code(code, main_globals, None,
[rank0]:   File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]:     exec(code, run_globals)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 216, in <module>
[rank0]:     engine = AsyncLLMEngine.from_engine_args(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 431, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 360, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 507, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 243, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 128, in __init__
[rank0]:     super().__init__(model_config, cache_config, parallel_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 42, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/gpu_executor.py", line 24, in _init_executor
[rank0]:     self.driver_worker.load_model()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 133, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 243, in load_model
[rank0]:     self.model = get_model(
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/__init__.py", line 21, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 267, in load_model
[rank0]:     model = _initialize_model(model_config, self.load_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 104, in _initialize_model
[rank0]:     return model_class(config=model_config.hf_config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_moe.py", line 364, in __init__
[rank0]:     self.model = Qwen2MoeModel(config, cache_config, quant_config)
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_moe.py", line 324, in __init__
[rank0]:     self.layers = nn.ModuleList([
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_moe.py", line 325, in <listcomp>
[rank0]:     Qwen2MoeDecoderLayer(config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_moe.py", line 265, in __init__
[rank0]:     self.mlp = Qwen2MoeSparseMoeBlock(config=config,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2_moe.py", line 102, in __init__
[rank0]:     self.experts = FusedMoE(num_experts=config.num_experts,
[rank0]:   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 128, in __init__
[rank0]:     assert self.quant_method is not None
[rank0]: AssertionError
Same question, did you solve it?
MoE quantization is not supported at the moment; see #5343.