
[Misc]: TTFT profiling with respect to prompt length #7635

Open · luowenjie14 opened this issue 3 months ago

luowenjie14 commented 3 months ago

Anything you want to discuss about vllm.

I am profiling TTFT and TPOT on my machine. I could not explain the behavior of TTFT, so I opened this issue to seek advice.

The figure below shows TTFT with respect to prompt length on my machine. The test conditions are as follows:

Steps taken for TTFT and TPOT profiling:

  1. Start the OpenAI API-compatible server: python -m vllm.entrypoints.openai.api_server --args
  2. Iteratively run benchmark_serving.py to get TTFT and TPOT, sending only one request to the server at a time to eliminate the effect of queueing time (see the sketch below for how each single-request measurement can be taken).
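
For reference, here is a minimal sketch of the kind of single-request, streaming TTFT measurement described in step 2. This is not the exact benchmark_serving.py logic; the port, model name, and max_tokens are placeholder assumptions:

```python
import time
import requests  # pip install requests

def measure_ttft(prompt: str, model: str,
                 url: str = "http://localhost:8000/v1/completions") -> float:
    """Send one streaming completion request and time the first returned chunk."""
    payload = {"model": model, "prompt": prompt, "max_tokens": 128, "stream": True}
    start = time.perf_counter()
    with requests.post(url, json=payload, stream=True) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:  # first non-empty SSE line carries the first generated token
                return time.perf_counter() - start
    raise RuntimeError("no tokens received")

# Example (model name is a placeholder):
# print(measure_ttft(prompt_text, model="meta-llama/Meta-Llama-3-8B-Instruct"))
```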

The profiled TTFT is shown below.

Observation 1: when the prompt length is less than 400, TTFT is roughly flat at ~100 ms. This value is consistent across different TP settings (I tried TP=1, TP=2, and TP=4).

Observation 2: when the prompt length is greater than 400, TTFT is linear in prompt length. This result is in line with Figure 6b of this paper (https://arxiv.org/pdf/2405.06856).

I don't understand the result of observation 1. Can anyone provide some insight into it? What causes TTFT to be a horizontal line when the prompt length is less than 400?

[figure: TTFT vs. prompt length]

ywang96 commented 3 months ago

There are a few things to discuss here:

  1. ShareGPT is a dataset with prompts of varying lengths. I suggest benchmarking with the sonnet or random dataset, where you can specify the length of all prompts.
  2. What are your default args for launching the model server? For Llama3-8B, keep in mind that chunked prefill is enabled by default for models with a long context window (>32768).
  3. TTFT measures the overall latency to the first token, so there is some fixed latency (though it definitely wouldn't be as high as 100 ms) introduced by other components of the server (API server, scheduler, preprocessing, etc.); the offline sketch after this list removes the API-server path entirely.
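
One way to isolate points 2 and 3 is to time prefill with the offline API, which bypasses the HTTP/API-server path. A minimal sketch, assuming a vLLM version where `LLM()` accepts `enable_chunked_prefill` (the model name is a placeholder):

```python
import time
from vllm import LLM, SamplingParams

# Offline engine: no API server, so the measured time is roughly scheduler + prefill + 1 token.
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct",
          enable_chunked_prefill=False)  # flip to True to compare the two modes

params = SamplingParams(max_tokens=1)    # stop right after the first generated token
prompt = "..."                           # substitute a prompt of the target length

llm.generate(prompt, params)             # warm-up run (caches, CUDA graphs)
start = time.perf_counter()
llm.generate(prompt, params)
print(f"prefill + 1 token: {(time.perf_counter() - start) * 1e3:.1f} ms")
```
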
luowenjie14 commented 3 months ago

Thank you @ywang96 for the suggestions, here is some additional info from me:

For point 1: I do sample requests with the specified prompt length from ShareGPT; for example, when testing prompt_length 100, I pick the request whose prompt length is closest to it (a sketch of that sampling is below).

For point 2: since I only send one request each time, will chunked prefill affect the result? In addition, the total context length of each tested request is less than 32768.

For point 3: I did measure the HTTP API server's latency (~8 ms), so it does not contribute much; I will evaluate the scheduler and preprocessing time to get a cleaner TTFT.
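
A minimal sketch of that sampling step, assuming the standard ShareGPT JSON layout (a list of entries with a "conversations" field) and a Hugging Face tokenizer for counting prompt tokens; the file path and model name are placeholders:

```python
import json
from transformers import AutoTokenizer

def sample_prompt_closest_to(dataset_path: str, target_len: int, tokenizer_name: str) -> str:
    """Pick the ShareGPT prompt whose token count is closest to target_len."""
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
    with open(dataset_path) as f:
        data = json.load(f)

    best_prompt, best_gap = None, float("inf")
    for entry in data:
        convs = entry.get("conversations", [])
        if not convs:
            continue
        prompt = convs[0]["value"]  # first human turn used as the prompt
        gap = abs(len(tokenizer(prompt).input_ids) - target_len)
        if gap < best_gap:
            best_prompt, best_gap = prompt, gap
    return best_prompt

# Example (paths/names are placeholders):
# prompt = sample_prompt_closest_to("ShareGPT_V3_unfiltered_cleaned_split.json", 100,
#                                   "meta-llama/Meta-Llama-3-8B-Instruct")
```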

George-ao commented 3 months ago

Probably it enters the compute-bound region when the prompt length reaches ~400; below that, prefill is memory-bound, so its latency is roughly constant.
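
A back-of-envelope roofline sketch of that crossover, assuming an 8B-parameter model in BF16 on an A100-class GPU (~312 TFLOP/s peak BF16, ~2 TB/s HBM bandwidth); all hardware numbers here are assumptions for illustration, not measurements from this issue:

```python
# Roofline estimate: prefill is memory-bound until the compute time
# (~2 * params * tokens FLOPs) exceeds the time to stream the weights once.
PARAMS = 8e9            # assumed model size (e.g. an 8B model)
BYTES_PER_PARAM = 2     # BF16 weights
PEAK_FLOPS = 312e12     # assumed peak BF16 throughput (A100-class)
MEM_BW = 2.0e12         # assumed HBM bandwidth, bytes/s

weight_read_time = PARAMS * BYTES_PER_PARAM / MEM_BW       # memory-bound floor (~8 ms ideal)

def prefill_time(prompt_len: int) -> float:
    compute_time = 2 * PARAMS * prompt_len / PEAK_FLOPS    # dense FLOPs only, ignores attention
    return max(weight_read_time, compute_time)             # roofline: whichever term dominates

crossover = PEAK_FLOPS * BYTES_PER_PARAM / (2 * MEM_BW)    # prompt length where the terms are equal
print(f"memory-bound floor: {weight_read_time * 1e3:.1f} ms, crossover: ~{crossover:.0f} tokens")
for n in (100, 200, 400, 800, 1600):
    print(n, f"{prefill_time(n) * 1e3:.1f} ms")
```

Real kernels reach only a fraction of peak and the server adds fixed overheads, so the measured floor (~100 ms) and crossover (~400 tokens) sit higher than these ideal numbers, but the flat-then-linear shape matches the observation.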

github-actions[bot] commented 2 weeks ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!