vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: RuntimeError on A800 using vllm0.6.1.post2 #8686

Open · double-vin opened this issue 1 month ago

double-vin commented 1 month ago

Your current environment

The output of `python collect_env.py`:

```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35

Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-372.9.1.el8.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800 80GB PCIe
GPU 1: NVIDIA A800 80GB PCIe
GPU 2: NVIDIA A800 80GB PCIe
GPU 3: NVIDIA A800 80GB PCIe
GPU 4: NVIDIA A800 80GB PCIe
GPU 5: NVIDIA A800 80GB PCIe
GPU 6: NVIDIA A800 80GB PCIe
GPU 7: NVIDIA A800 80GB PCIe

Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Address sizes:       43 bits physical, 48 bits virtual
Byte Order:          Little Endian
CPU(s):              128
On-line CPU(s) list: 0-127
Vendor ID:           HygonGenuine
BIOS Vendor ID:      Chengdu Hygon
Model name:          Hygon C86 7385 32-core Processor
BIOS Model name:     Hygon C86 7385 32-core Processor
CPU family:          24
Model:               2
Thread(s) per core:  2
Core(s) per socket:  32
Socket(s):           2
Stepping:            2
BogoMIPS:            3999.97
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sme sev sev_es
Virtualization:      AMD-V
L1d cache:           2 MiB (64 instances)
L1i cache:           4 MiB (64 instances)
L2 cache:            32 MiB (64 instances)
L3 cache:            128 MiB (16 instances)
NUMA node(s):        8
NUMA node0 CPU(s):   0-7,64-71
NUMA node1 CPU(s):   8-15,72-79
NUMA node2 CPU(s):   16-23,80-87
NUMA node3 CPU(s):   24-31,88-95
NUMA node4 CPU(s):   32-39,96-103
NUMA node5 CPU(s):   40-47,104-111
NUMA node6 CPU(s):   48-55,112-119
NUMA node7 CPU(s):   56-63,120-127
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.555.43
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.5.82
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.2.0
[pip3] sentence-transformers==3.0.1
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.44.2
[pip3] triton==3.0.0
[conda] numpy                    1.26.4      pypi_0  pypi
[conda] nvidia-cublas-cu12       12.1.3.1    pypi_0  pypi
[conda] nvidia-cuda-cupti-cu12   12.1.105    pypi_0  pypi
[conda] nvidia-cuda-nvrtc-cu12   12.1.105    pypi_0  pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105    pypi_0  pypi
[conda] nvidia-cudnn-cu12        9.1.0.70    pypi_0  pypi
[conda] nvidia-cufft-cu12        11.0.2.54   pypi_0  pypi
[conda] nvidia-curand-cu12       10.3.2.106  pypi_0  pypi
[conda] nvidia-cusolver-cu12     11.4.5.107  pypi_0  pypi
[conda] nvidia-cusparse-cu12     12.1.0.106  pypi_0  pypi
[conda] nvidia-ml-py             12.555.43   pypi_0  pypi
[conda] nvidia-nccl-cu12         2.20.5      pypi_0  pypi
[conda] nvidia-nvjitlink-cu12    12.5.82     pypi_0  pypi
[conda] nvidia-nvtx-cu12         12.1.105    pypi_0  pypi
[conda] pyzmq                    26.2.0      pypi_0  pypi
[conda] sentence-transformers    3.0.1       pypi_0  pypi
[conda] torch                    2.4.0       pypi_0  pypi
[conda] torchvision              0.19.0      pypi_0  pypi
[conda] transformers             4.44.2      pypi_0  pypi
[conda] triton                   3.0.0       pypi_0  pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.1.post2@9ba0817ff1eb514f51cc6de9cb8e16c98d6ee44f
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
      GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  NIC0  NIC1  CPU Affinity   NUMA Affinity  GPU NUMA ID
GPU0   X    PIX   PXB   PXB   SYS   SYS   SYS   SYS   SYS   SYS   24-31,88-95    3              N/A
GPU1  PIX    X    PXB   PXB   SYS   SYS   SYS   SYS   SYS   SYS   24-31,88-95    3              N/A
GPU2  PXB   PXB    X    PXB   SYS   SYS   SYS   SYS   SYS   SYS   24-31,88-95    3              N/A
GPU3  PXB   PXB   PXB    X    SYS   SYS   SYS   SYS   SYS   SYS   24-31,88-95    3              N/A
GPU4  SYS   SYS   SYS   SYS    X    PIX   PXB   PXB   SYS   SYS   56-63,120-127  7              N/A
GPU5  SYS   SYS   SYS   SYS   PIX    X    PXB   PXB   SYS   SYS   56-63,120-127  7              N/A
GPU6  SYS   SYS   SYS   SYS   PXB   PXB    X    PXB   SYS   SYS   56-63,120-127  7              N/A
GPU7  SYS   SYS   SYS   SYS   PXB   PXB   PXB    X    SYS   SYS   56-63,120-127  7              N/A
NIC0  SYS   SYS   SYS   SYS   SYS   SYS   SYS   SYS    X    PIX
NIC1  SYS   SYS   SYS   SYS   SYS   SYS   SYS   SYS   PIX    X

Legend:
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:
  NIC0: mlx5_0
  NIC1: mlx5_1
```

Model Input Dumps

No response

🐛 Describe the bug

Example command:

```text
python benchmark_throughput.py --trust-remote-code --enforce-eager --dtype float16 --num-prompts 1 --input-len 2 --output-len 128 --model Llama-2-7b-chat-hf
```

Output:

```text
Namespace(backend='vllm', dataset=None, input_len=2, output_len=128, model='Llama-2-7b-chat-hf', tokenizer='Llama-2-7b-chat-hf/', quantization=None, tensor_parallel_size=1, n=1, use_beam_search=False, num_iters_warmup=1, num_prompts=1, seed=0, hf_max_batch_size=None, trust_remote_code=True, max_model_len=None, dtype='float16', gpu_memory_utilization=0.9, enforce_eager=True, kv_cache_dtype='auto', quantization_param_path=None, device='auto', num_scheduler_steps=1, use_v2_block_manager=False, enable_prefix_caching=False, enable_chunked_prefill=False, max_num_batched_tokens=None, download_dir=None, output_json=None, distributed_executor_backend=None, load_format='auto', disable_async_output_proc=False, async_engine=False, disable_frontend_multiprocessing=False)
Traceback (most recent call last):
  File "/data/test/benchmark_throughput.py", line 617, in <module>
    main(args)
  File "/data/test/benchmark_throughput.py", line 382, in main
    elapsed_time = run_vllm(*run_args)
  File "/data/test/benchmark_throughput.py", line 98, in run_vllm
    llm = LLM(
  File "/opt/miniconda3/envs/vllm/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 178, in __init__
    self.llm_engine = LLMEngine.from_engine_args(
  File "/opt/miniconda3/envs/vllm/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 547, in from_engine_args
    engine_config = engine_args.create_engine_config()
  File "/opt/miniconda3/envs/vllm/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 843, in create_engine_config
    device_config = DeviceConfig(device=self.device)
  File "/opt/miniconda3/envs/vllm/lib/python3.10/site-packages/vllm/config.py", line 1094, in __init__
    self.device = torch.device(self.device_type)
TypeError: device() received an invalid combination of arguments - got (bool), but expected one of:
```
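For reference, the failing call in the traceback is `torch.device(self.device_type)`, and the error message says a bool reached it. A minimal standalone sketch (not vLLM code) that reproduces this class of error:

```python
import torch

# torch.device() accepts a device string ("cpu", "cuda", "cuda:0") or an
# index, but not a bool -- passing one raises the same TypeError shown in
# the traceback above.
try:
    torch.device(True)
except TypeError as exc:
    print(exc)  # device() received an invalid combination of arguments - got (bool), ...

# The intended call works when device_type is a proper string:
print(torch.device("cuda"))  # device(type='cuda')
```

So the question is how a bool ended up in `device` when the parsed Namespace shows `device='auto'`.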


youkaichao commented 1 month ago

Can you debug to see why it errors? It looks unrelated to the GPU; there may be a bug in the benchmark script.
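One way to narrow it down is shown below: a hypothetical debugging snippet (variable names assumed, not repo code) placed in `benchmark_throughput.py` immediately before the `LLM(...)` constructor, to see what value actually reaches `DeviceConfig`:

```python
# Hypothetical check: the Namespace above shows device='auto' at parse time,
# so if a bool shows up here, the value was clobbered somewhere between
# argument parsing and engine construction.
print(type(device), repr(device))  # expected: <class 'str'> 'auto'
assert isinstance(device, str), f"device should be str, got {type(device).__name__}"
```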

double-vin commented 1 month ago

It was a minor issue on my side, and I have already resolved it. May I ask why there is no warmup before inference here?
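For reference on the warmup question: the script does expose `--num-iters-warmup` (visible in the Namespace above as `num_iters_warmup=1`), and a warmup can also be done by hand. A minimal sketch, assuming a local `Llama-2-7b-chat-hf` checkout as in the command above:

```python
import time

from vllm import LLM, SamplingParams

llm = LLM(model="Llama-2-7b-chat-hf", dtype="float16", enforce_eager=True)
params = SamplingParams(max_tokens=128)

# Warmup: one untimed generation so one-time costs (CUDA context creation,
# kernel and allocator initialization) do not pollute the measured run.
llm.generate(["Hello"], params)

start = time.perf_counter()
llm.generate(["Hello"], params)
print(f"timed generation: {time.perf_counter() - start:.3f}s")
```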