
[Performance]: how to test tensorrt-llm serving correctly #4803

Closed · RunningLeon closed this issue 4 months ago

RunningLeon commented 4 months ago

Proposal to improve performance

Hi, how do I test TensorRT-LLM serving correctly? I've tested llama2-8b-chat and llama3-8b, and the TTFT results are far too high. Could you tell me what is going wrong? Thanks.

I used the Docker image nvcr.io/nvidia/tritonserver:24.04-trtllm-python-py3 and followed this doc: https://github.com/triton-inference-server/tensorrtllm_backend/blob/main/docs/llama.md
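
One way to sanity-check the endpoint outside the benchmark harness is a single streaming request, watching when the first chunk comes back. A minimal sketch is below; the text_input/max_tokens/stream field names are assumed from the tensorrtllm_backend generate API and may need adjusting to your ensemble config:

# Single-request probe against the same streaming endpoint the benchmark hits.
# Field names here are an assumption based on the tensorrtllm_backend generate API.
curl -sN http://localhost:8000/v2/models/ensemble/generate_stream \
  -H 'Content-Type: application/json' \
  -d '{"text_input": "What is machine learning?", "max_tokens": 64, "stream": true}'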

These are the results for request rate = 7:

Traffic request rate: 7.0
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 100/100 [02:05<00:00,  1.25s/it]
============ Serving Benchmark Result ============
Successful requests:                     100       
Benchmark duration (s):                  125.17    
Total input tokens:                      22925     
Total generated tokens:                  21752     
Request throughput (req/s):              0.80      
Input token throughput (tok/s):          183.14    
Output token throughput (tok/s):         173.77    
---------------Time to First Token----------------
Mean TTFT (ms):                          47395.15  
Median TTFT (ms):                        45446.53  
P99 TTFT (ms):                           96636.63  
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          19.38     
Median TPOT (ms):                        18.76     
P99 TPOT (ms):                           25.42     

related issue: https://github.com/triton-inference-server/tensorrtllm_backend/issues/453
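
For what it's worth, these numbers look consistent with requests piling up in a queue: prompts are dispatched at 7 req/s but completed at only 0.80 req/s, so most of the measured TTFT may simply be time spent waiting before a prompt is scheduled. A rough back-of-envelope check (assuming a constant completion rate, which the server does not literally have):

# Back-of-envelope queueing estimate; 0.80 req/s is taken from "Request throughput" above.
awk 'BEGIN {
  arrival = 7.0; service = 0.80; n = 100
  mean_wait = (1/service - 1/arrival) * (n - 1) / 2   # average wait in seconds
  printf "expected mean queueing delay ~ %.0f s\n", mean_wait   # ~55 s, same order as the 47 s mean TTFT
}'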

Report of performance regression

Ran this script:

tokenizer=./llama3/Meta-Llama-3-8B-Instruct

python3 benchmarks/benchmark_serving.py \
  --backend tensorrt-llm \
  --endpoint /v2/models/ensemble/generate_stream \
  --model ensemble \
  --tokenizer $tokenizer \
  --dataset-name sharegpt \
  --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json \
  --port 8000 \
  --trust-remote-code \
  --request-rate 7 \
  --num-prompts 100
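
Note that with a finite --request-rate the script does not send requests at exact 1/7 s intervals; if I read benchmark_serving.py right, it draws exponential inter-arrival gaps (a Poisson process), so all 100 prompts are dispatched within roughly 100/7 ≈ 14 s on average. A toy illustration of that arrival pattern (not the benchmark's own code):

# Toy sketch: exponential inter-arrival gaps at an average rate of 7 req/s.
awk 'BEGIN {
  srand(); rate = 7.0; t = 0
  for (i = 1; i <= 5; i++) { t += -log(rand()) / rate; printf "request %d dispatched at ~%.2f s\n", i, t }
}'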

Misc discussion on performance

No response

Your current environment (if you think it is necessary)

PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.28.1
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB

Nvidia driver version: 535.54.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   43 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          128
On-line CPU(s) list:             0-127
Vendor ID:                       AuthenticAMD
Model name:                      AMD EPYC 7742 64-Core Processor
CPU family:                      23
Model:                           49
Thread(s) per core:              1
Core(s) per socket:              64
Socket(s):                       2
Stepping:                        0
Frequency boost:                 enabled
CPU max MHz:                     2250.0000
CPU min MHz:                     1500.0000
BogoMIPS:                        4500.13
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca
Virtualization:                  AMD-V
L1d cache:                       4 MiB (128 instances)
L1i cache:                       4 MiB (128 instances)
L2 cache:                        64 MiB (128 instances)
L3 cache:                        512 MiB (32 instances)
NUMA node(s):                    8
NUMA node0 CPU(s):               0-15
NUMA node1 CPU(s):               16-31
NUMA node2 CPU(s):               32-47
NUMA node3 CPU(s):               48-63
NUMA node4 CPU(s):               64-79
NUMA node5 CPU(s):               80-95
NUMA node6 CPU(s):               96-111
NUMA node7 CPU(s):               112-127
Vulnerability L1tf:              Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; Load fences, __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full retpoline

Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] onnx==1.15.0rc2
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnxruntime==1.16.3
[pip3] optree==0.10.0
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.3.0
[pip3] torch-tensorrt==2.3.0a0
[pip3] torchdata==0.7.1a0
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.18.0a0
[pip3] triton==2.3.0
[pip3] tritonclient==2.45.0
[pip3] vllm_nccl_cu12==2.18.1.0.4.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.2
vLLM Build Flags:
CUDA Archs: 5.2 6.0 6.1 7.0 7.2 7.5 8.0 8.6 8.7 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    NIC2    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV12    NV12    NV12    NV12    NV12    NV12    NV12    SYS     SYS     SYS     48-63   3               N/A
GPU1    NV12     X      NV12    NV12    NV12    NV12    NV12    NV12    SYS     SYS     SYS     48-63   3               N/A
GPU2    NV12    NV12     X      NV12    NV12    NV12    NV12    NV12    PXB     PXB     SYS     16-31   1               N/A
GPU3    NV12    NV12    NV12     X      NV12    NV12    NV12    NV12    PXB     PXB     SYS     16-31   1               N/A
GPU4    NV12    NV12    NV12    NV12     X      NV12    NV12    NV12    SYS     SYS     PXB     96-111  6               N/A
GPU5    NV12    NV12    NV12    NV12    NV12     X      NV12    NV12    SYS     SYS     PXB     96-111  6               N/A
GPU6    NV12    NV12    NV12    NV12    NV12    NV12     X      NV12    SYS     SYS     SYS     64-79   4               N/A
GPU7    NV12    NV12    NV12    NV12    NV12    NV12    NV12     X      SYS     SYS     SYS     64-79   4               N/A
NIC0    SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS      X      PXB     SYS
NIC1    SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS     PXB      X      SYS
NIC2    SYS     SYS     SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_bond_0
simon-mo commented 4 months ago

cc @ywang96 if you can help answer

geraldstanje commented 2 months ago

Hi @RunningLeon, how did you end up solving this? Could you please give some insight?

RunningLeon commented 2 months ago

> Hi @RunningLeon, how did you end up solving this? Could you please give some insight?

@geraldstanje Hi, you can refer to this comment: https://github.com/triton-inference-server/tensorrtllm_backend/issues/453#issuecomment-2111521451

geraldstanje commented 2 months ago

@RunningLeon Thanks. Could you kindly post all the parameters you used to run vLLM with Llama-3 8B?
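
(The exact settings were never posted in this thread. For reference only, a typical single-GPU vLLM setup for Llama-3-8B, paired with the same benchmark script and request rate used above, might look like the sketch below; these are illustrative defaults, not the author's actual parameters.)

# Illustrative vLLM 0.4.x serving setup for Llama-3-8B (assumed values, not from this thread).
python -m vllm.entrypoints.openai.api_server \
  --model meta-llama/Meta-Llama-3-8B-Instruct \
  --port 8000 \
  --tensor-parallel-size 1 \
  --gpu-memory-utilization 0.90 \
  --disable-log-requests

# Benchmark against it with the same script, dataset, and request rate as above.
python3 benchmarks/benchmark_serving.py \
  --backend vllm \
  --model meta-llama/Meta-Llama-3-8B-Instruct \
  --dataset-name sharegpt \
  --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json \
  --port 8000 \
  --request-rate 7 \
  --num-prompts 100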