vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Usage]: Set dtype for VLLM using YAML #3503

Closed telekoteko closed 3 months ago

telekoteko commented 3 months ago

Your current environment

Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.19.0-1010-nvidia-lowlatency-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 2080 Ti
GPU 4: NVIDIA GeForce RTX 2080 Ti
GPU 5: NVIDIA GeForce RTX 2080 Ti
GPU 6: NVIDIA GeForce RTX 2080 Ti
GPU 7: NVIDIA GeForce RTX 2080 Ti

Nvidia driver version: 550.54.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
Stepping: 1
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4190.30
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Virtualization: VT-x
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 4 MiB (16 instances)
L3 cache: 40 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-7,16-23
NUMA node1 CPU(s): 8-15,24-31
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.2.1
[pip3] triton==2.2.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: N/A
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
       GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0   X     PIX   PHB   PHB   SYS   SYS   SYS   SYS   0-7,16-23     0              N/A
GPU1   PIX   X     PHB   PHB   SYS   SYS   SYS   SYS   0-7,16-23     0              N/A
GPU2   PHB   PHB   X     PIX   SYS   SYS   SYS   SYS   0-7,16-23     0              N/A
GPU3   PHB   PHB   PIX   X     SYS   SYS   SYS   SYS   0-7,16-23     0              N/A
GPU4   SYS   SYS   SYS   SYS   X     PIX   PHB   PHB   8-15,24-31    1              N/A
GPU5   SYS   SYS   SYS   SYS   PIX   X     PHB   PHB   8-15,24-31    1              N/A
GPU6   SYS   SYS   SYS   SYS   PHB   PHB   X     PIX   8-15,24-31    1              N/A
GPU7   SYS   SYS   SYS   SYS   PHB   PHB   PIX   X     8-15,24-31    1              N/A

Legend:

X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX  = Connection traversing at most a single PCIe bridge
NV#  = Connection traversing a bonded set of # NVLinks

How would you like to use vllm

I want to run inference using vLLM with TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ. I get the following error:

{"error":{"code":500,"message":"could not load model (no success): Unexpected err=ValueError('Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your NVIDIA GeForce RTX 2080 Ti GPU has compute capability 7.5. You can use float16 instead by explicitly setting thedtypeflag in CLI, for example: --dtype=half.'), type(err)=\u003cclass 'ValueError'\u003e",

But I can't find a way to set dtype = 'half' inside my vllm.yaml:


name: vllm
backend: vllm
parameters:
  model: "TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ"

# Uncomment to specify a quantization method (optional)
quantization: "gptq"
# Uncomment to limit the GPU memory utilization (vLLM default is 0.9 for 90%)
gpu_memory_utilization: 0.7
# Uncomment to trust remote code from huggingface
trust_remote_code: true
# Uncomment to enable eager execution
# enforce_eager: true
# Uncomment to specify the size of the CPU swap space per GPU (in GiB)
# swap_space: 2
# Uncomment to specify the maximum length of a sequence (including prompt and output)
max_model_len: 32000
tensor-parallel-size: 8
cuda: true
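
(If LocalAI forwards these top-level keys straight to vLLM's engine arguments, as `quantization`, `gpu_memory_utilization`, and `max_model_len` suggest, one guess would be an equivalent top-level `dtype` key; this is an untested assumption about LocalAI's config handling, not a documented option:)

# Untested guess: only works if LocalAI passes this key through to the vLLM engine
dtype: "float16"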

I'm using Docker:

sudo docker run --rm -ti --gpus all -p 8080:8080 -e DEBUG=true -e MODELS_PATH=/models -e THREADS=1 -v /opt/localai/models:/models --name localai quay.io/go-skynet/local-ai:master-cublas-cuda12-ffmpeg
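
(As a workaround sketch that bypasses LocalAI entirely, the model could be served with vLLM's own Docker image, which definitely accepts `--dtype half`; the image tag and port 8000 are vLLM's defaults, and the remaining flags mirror the YAML above:)

sudo docker run --rm -ti --gpus all -p 8000:8000 vllm/vllm-openai:latest --model TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ --quantization gptq --dtype half --tensor-parallel-size 8 --max-model-len 32000 --gpu-memory-utilization 0.7 --trust-remote-code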

simon-mo commented 3 months ago

vLLM does not support YAML file deployment at the moment. This might be go-skynet/local-ai's special handling?