vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: llama 3.1 70b vs mistral-large generation speed #7228

Open ccdv-ai opened 3 months ago

ccdv-ai commented 3 months ago

Your current environment

Tested with both v0.5.3.post1 and v0.5.4.

Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.6
Libc version: glibc-2.35

Python version: 3.9.19 (main, May  6 2024, 19:43:03)  [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-117-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA L40
GPU 1: NVIDIA L40
GPU 2: NVIDIA L40
GPU 3: NVIDIA L40

Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        52 bits physical, 57 bits virtual
Byte Order:                           Little Endian
CPU(s):                               64
On-line CPU(s) list:                  0-63
Vendor ID:                            AuthenticAMD
Model name:                           AMD EPYC 9124 16-Core Processor
CPU family:                           25
Model:                                17
Thread(s) per core:                   2
Core(s) per socket:                   16
Socket(s):                            2
Stepping:                             1
BogoMIPS:                             5991.11
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization:                       AMD-V
L1d cache:                            1 MiB (32 instances)
L1i cache:                            1 MiB (32 instances)
L2 cache:                             32 MiB (32 instances)
L3 cache:                             128 MiB (8 instances)
NUMA node(s):                         2
NUMA node0 CPU(s):                    0-15,32-47
NUMA node1 CPU(s):                    16-31,48-63
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Mitigation; safe RET
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] flashinfer==0.1.1+cu121torch2.3
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] pyzmq==26.0.3
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] transformers==4.43.3
[pip3] triton==2.3.1
[pip3] zmq==0.0.0
[conda] blas                      1.0                         mkl  
[conda] flashinfer                0.1.1+cu121torch2.3          pypi_0    pypi
[conda] mkl                       2023.1.0         h213fc3f_46344  
[conda] mkl-service               2.4.0            py39h5eee18b_1  
[conda] mkl_fft                   1.3.8            py39h5eee18b_0  
[conda] mkl_random                1.2.4            py39hdb19cb5_0  
[conda] numpy                     1.26.4           py39h5f9d8c6_0  
[conda] numpy-base                1.26.4           py39hb5e798b_0  
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] pyzmq                     26.0.3                   pypi_0    pypi
[conda] torch                     2.3.1                    pypi_0    pypi
[conda] torchvision               0.18.1                   pypi_0    pypi
[conda] transformers              4.43.3                   pypi_0    pypi
[conda] triton                    2.3.1                    pypi_0    pypi
[conda] zmq                       0.0.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NODE    SYS     SYS     0-15,32-47      0               N/A
GPU1    NODE     X      SYS     SYS     0-15,32-47      0               N/A
GPU2    SYS     SYS      X      NODE    16-31,48-63     1               N/A
GPU3    SYS     SYS     NODE     X      16-31,48-63     1               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

Tested with both v0.5.3.post1 and v0.5.4.

Comparing 3 models: Meta-Llama-3.1-70B-Instruct-FP8, llama-3.1-70b-instruct-awq, and Mistral-Large-Instruct-2407-AWQ. AFAIK, Mistral Large shares the same architecture as Mistral Nemo, which is supported.

I'm testing with this config on a 4 x L40 rig (192 GB VRAM):

python -u -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --model models/Meta-Llama-3.1-70B-Instruct-FP8 \
    --dtype "auto" \
    --port 8000 \
    --seed 123 \
    --max-model-len 32768 \
    --gpu-memory-utilization 0.92 \
    --tensor-parallel-size 4 \
    --max-num-seqs 32 \
    --use-v2-block-manager \
    --max-log-len 20 \
    --served-model-name llama

I'm getting strange results: the 124B Mistral Large AWQ generates 2-3x faster than either 70B Llama variant.

|                   | Llama FP8   | Llama AWQ   | Mistral-L AWQ |
|-------------------|-------------|-------------|---------------|
| Params            | 70B         | 70B         | 124B          |
| Tokenizer (vocab) | 128k        | 128k        | 32k           |
| Size              | ~70 GB      | ~35 GB      | ~60 GB        |
| Avg ctx           | ~6k         | ~6k         | ~8k           |
| Gen speed         | ~80-100 t/s | ~80-100 t/s | ~200-250 t/s  |
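
For reference, this is roughly how I measure the generation speed (a minimal sketch against the server launched above, using the openai v1 Python client; the prompt and max_tokens are arbitrary choices of mine, and timing includes prompt processing):

# Rough single-request tokens/sec measurement against the
# OpenAI-compatible server started above.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

start = time.perf_counter()
resp = client.completions.create(
    model="llama",  # matches --served-model-name above
    prompt="Write a short story about a robot.",  # arbitrary prompt
    max_tokens=512,
)
elapsed = time.perf_counter() - start

gen = resp.usage.completion_tokens
print(f"{gen} tokens in {elapsed:.2f}s -> {gen / elapsed:.1f} t/s")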

Mistral Large also has random quality drops and often fails with guided_json (it returns empty JSON). I tested different AWQ quants and double-checked the configs.
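
For the guided_json failures, a minimal repro sketch (the schema here is an arbitrary placeholder, not the exact one from my tests; it assumes the guided_json extra-body parameter of vLLM's OpenAI-compatible server):

# Minimal guided_json repro sketch; schema is a placeholder example.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

resp = client.chat.completions.create(
    model="llama",  # served model name; point this at the Mistral Large deployment
    messages=[{"role": "user", "content": "Give me a person as JSON."}],
    extra_body={"guided_json": schema},  # vLLM guided decoding parameter
    max_tokens=128,
)
out = resp.choices[0].message.content
print(repr(out))  # with Mistral Large this often comes back empty
json.loads(out)   # raises JSONDecodeError on empty output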

Is Mistral-Large not supported at all?

Playerrrrr commented 2 months ago

Any updates? @jon-chuang

qeternity commented 2 months ago

Confirmed. There is something wrong with Mistral Large.