vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Performance]: How to measure and understand the performance benefit of prefix caching #7518

Closed PanJason closed 3 days ago

PanJason commented 3 months ago

Proposal to improve performance

No response

Report of performance regression

No response

Misc discussion on performance

I am trying to understand how prefix caching helps with prompts of different lengths, so I ran benchmark_prefix_caching.py with different prompt lengths (basically PROMPT * n in the code). I also changed num_prompts to 1 and set output_len to 1. In addition, I modified model_runner.py to understand where the benefit comes from. IMO, the benefit should come from model execution, because prefix caching reduces the complexity of attention from O(n^2) to O(n). I added the following timestamps for my understanding:

diff --git a/vllm/worker/model_runner.py b/vllm/worker/model_runner.py
index f9c26e0c..fdac9af4 100644
--- a/vllm/worker/model_runner.py
+++ b/vllm/worker/model_runner.py
@@ -1360,6 +1360,7 @@ class ModelRunner(GPUModelRunnerBase[ModelInputForGPUWithSamplingMetadata]):
             "finished_requests_ids": model_input.finished_requests_ids,
             "request_ids_to_seq_ids": model_input.request_ids_to_seq_ids,
         } if self.has_seqlen_agnostic else {}
+        t0 = time.perf_counter()
         hidden_or_intermediate_states = model_executable(
             input_ids=model_input.input_tokens,
             positions=model_input.input_positions,
@@ -1369,17 +1370,23 @@ class ModelRunner(GPUModelRunnerBase[ModelInputForGPUWithSamplingMetadata]):
             **MultiModalInputs.as_kwargs(multi_modal_kwargs,
                                          device=self.device),
             **seqlen_agnostic_kwargs)
+        t1 = time.perf_counter()
+        print(f"execute time: {t1 - t0}")

+        t0 = time.perf_counter()
         # Compute the logits in the last pipeline stage.
         if not get_pp_group().is_last_rank:
             return hidden_or_intermediate_states

         logits = self.model.compute_logits(hidden_or_intermediate_states,
                                            model_input.sampling_metadata)
+        t1 = time.perf_counter()
+        print(f"compute logits time: {t1 - t0}")

         if not self.is_driver_worker:
             return []

+        t0 = time.perf_counter()
         # Sample the next token.
         output: SamplerOutput = self.model.sample(
             logits=logits,
@@ -1399,6 +1406,8 @@ class ModelRunner(GPUModelRunnerBase[ModelInputForGPUWithSamplingMetadata]):
                 hidden_states = hidden_or_intermediate_states

             output.hidden_states = hidden_states
+        t1 = time.perf_counter()
+        print(f"sample time: {t1 - t0}")

         return [output]

Basically, these timestamps capture the time spent on model execution, logit computation, and sampling, respectively.

I set n=40 (prompt length = 30121) and tested a llama-7b variant that supports a long context window. The commands I ran are as follows:

python3 benchmarks/benchmark_prefix_caching.py --model=togethercomputer/Llama-2-7B-32K-Instruct  --output-len=1 --enable-prefix-caching
python3 benchmarks/benchmark_prefix_caching.py --model=togethercomputer/Llama-2-7B-32K-Instruct  --output-len=1

The result I got when I enabled prefix caching is:

execute time: 0.011940602911636233
compute logits time: 6.694509647786617e-05
sample time: 0.022152308141812682
cost time 0.16930818557739258

Without prefix caching, it is:

execute time: 0.01532919192686677
compute logits time: 9.334413334727287e-05
sample time: 2.9377362160012126
cost time 2.982032060623169

I wonder why the sample time decreases dramatically rather than the time spent in model_executable. I further identified the place in sampler.py that costs the most time. By default, vLLM uses greedy sampling, and the following line in the greedy sampling path is the most expensive:

    samples_lst = samples.tolist()

This is very different from what I expected. One explanation I can think of is that the model executable is not really executing the CUDA kernels synchronously; instead, something like a future is only resolved when tolist() is called. I guessed this might be related to CUDA graphs, but I saw the above result with CUDA graphs both enabled and disabled.

I also tried different models (llama-2-7b, llama-2-13b) with different prompt lengths and saw the same pattern: most of the time reduction comes from sampling rather than from model_executable, so I am quite confused. Has anyone looked into the execution path with prefix caching in detail? Thanks a lot!
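For reference, here is a minimal standalone sketch (not from the original benchmark) of the hypothesis above: the kernel launch itself returns almost immediately, while a host-side copy such as .tolist() blocks until the GPU work has finished, so the kernel time shows up wherever the first synchronization happens.

    # Sketch: CUDA launches are asynchronous; .tolist() forces a sync.
    import time

    import torch

    assert torch.cuda.is_available()
    x = torch.randn(8192, 8192, device="cuda")

    t0 = time.perf_counter()
    y = x @ x  # launches the matmul kernel and returns without waiting
    t1 = time.perf_counter()
    vals = y[0, :4].tolist()  # device-to-host copy blocks until the GPU is done
    t2 = time.perf_counter()

    print(f"launch (no sync): {t1 - t0:.6f}s")  # typically tiny
    print(f"tolist (syncs):   {t2 - t1:.6f}s")  # absorbs the kernel time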

Your current environment (if you think it is necessary)

Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.31

Python version: 3.10.14 (main, Apr  6 2024, 18:45:05) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB

Nvidia driver version: 525.147.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Byte Order:                         Little Endian
Address sizes:                      48 bits physical, 48 bits virtual
CPU(s):                             128
On-line CPU(s) list:                0-127
Thread(s) per core:                 2
Core(s) per socket:                 32
Socket(s):                          2
NUMA node(s):                       8
Vendor ID:                          AuthenticAMD
CPU family:                         25
Model:                              1
Model name:                         AMD EPYC 7543 32-Core Processor
Stepping:                           1
CPU MHz:                            2794.820
BogoMIPS:                           5589.64
Virtualization:                     AMD-V
L1d cache:                          2 MiB
L1i cache:                          2 MiB
L2 cache:                           32 MiB
L3 cache:                           512 MiB
NUMA node0 CPU(s):                  0-7,64-71
NUMA node1 CPU(s):                  8-15,72-79
NUMA node2 CPU(s):                  16-23,80-87
NUMA node3 CPU(s):                  24-31,88-95
NUMA node4 CPU(s):                  32-39,96-103
NUMA node5 CPU(s):                  40-47,104-111
NUMA node6 CPU(s):                  48-55,112-119
NUMA node7 CPU(s):                  56-63,120-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] pyzmq==26.1.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.44.0
[pip3] triton==3.0.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.4
vLLM Build Flags:
CUDA Archs: 5.2 6.0 6.1 7.0 7.5 8.0 8.6 9.0+PTX; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    NIC0    CPU Affinity    NUMA Affinity
GPU0     X      NV12    SYS     40-47,104-111   5
GPU1    NV12     X      SYS     40-47,104-111   5
NIC0    SYS     SYS      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_bond_0
PanJason commented 3 months ago

We also ran llava2 on A20 and saw a similar case.

PanJason commented 3 months ago

I think we have found the reason. All PyTorch CUDA operations are asynchronous: the torch functions are just wrappers that launch the kernels, and torch ensures the correct ordering through synchronization. See "Asynchronous execution" in the PyTorch documentation for more details. This teaches us a lesson: for all torch CUDA-related operations, use torch.cuda.Event to measure time, and never use time.perf_counter(), which is only suitable for end-to-end measurement.
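For reference, a minimal sketch (not from the original issue) of the recommended pattern: time GPU work with torch.cuda.Event, or synchronize before reading the host clock, instead of wrapping just the kernel launch in time.perf_counter().

    # Sketch: measure GPU time with CUDA events instead of the host clock.
    import torch

    assert torch.cuda.is_available()
    x = torch.randn(8192, 8192, device="cuda")

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)

    start.record()
    y = x @ x                 # asynchronous kernel launch
    end.record()
    torch.cuda.synchronize()  # wait until both events have been reached

    print(f"GPU time: {start.elapsed_time(end):.3f} ms")  # measured on the device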

github-actions[bot] commented 3 days ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!