
[Performance]: vllm Eagle performance is worse than expected #9565

Open LiuXiaoxuanPKU opened 1 week ago

LiuXiaoxuanPKU commented 1 week ago

Proposal to improve performance

The speculative decoding performance of EAGLE is worse than expected, as shown below:

- Model: meta-llama/Meta-Llama-3.1-70B-Instruct
- Draft model: yuhuili/EAGLE-LLaMA3-Instruct-70B
- Hardware: 4xH100
- Target model TP=4
- Dataset: ShareGPT
- vllm version: v0.6.1.post2
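For reference, a minimal offline sketch of this kind of setup (not the exact benchmark script; the engine-arg names follow the public `LLM()` API of roughly this vLLM version, and the draft length `num_speculative_tokens=4` is an assumed value):

```python
# Minimal offline sketch of the benchmarked setup (assumed draft length k=4;
# engine-arg names as in vLLM ~v0.6.x, adjust for your version).
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",
    tensor_parallel_size=4,                          # target model TP=4 on 4xH100
    speculative_model="yuhuili/EAGLE-LLaMA3-Instruct-70B",
    num_speculative_tokens=4,                        # assumed draft length
    use_v2_block_manager=True,                       # spec decode requires the v2 block manager
)

outputs = llm.generate(
    ["Explain speculative decoding in two sentences."],
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```

The QPS sweep itself presumably goes through the OpenAI-compatible server and a ShareGPT benchmark client rather than this offline path, but the speculative-decoding engine arguments are the same either way.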

[Figure: benchmark results across QPS, baseline vs. speculative decoding (screenshot, 2024-10-21)]

Even at low QPS, the performance is far from the 2x speedup reported in the original EAGLE paper (the light blue line is the original baseline; the solid lines are with SD). We need to understand the performance gap here. Possible reasons include, but are not limited to:

  1. Missing tree verification kernel: for each position we choose only the top-1 candidate token instead of the top-k candidates, because the tree verification kernel has not been integrated yet (see the toy sketch after this list).
  2. System overhead: unnecessary GPU/CPU communication somewhere.
  3. We are testing on the ShareGPT dataset, while the heads were not fine-tuned on the same dataset.
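To make reason 1 concrete, here is a toy illustration (purely illustrative, not vLLM internals) of how many candidate continuations a top-1 chain versus a top-k tree exposes to the verifier; with a chain, a single rejected draft token discards everything after it, whereas a tree gives the verifier several alternatives per position:

```python
# Toy comparison of candidate coverage: top-1 chain drafting vs. top-k tree
# drafting. Purely illustrative; not how vLLM represents proposals internally.

def chain_paths(depth: int) -> int:
    """Top-1 chain: one draft token per position -> a single candidate path."""
    return 1

def tree_paths(depth: int, branching: int) -> int:
    """Top-k tree: keep `branching` candidates per position; the target model
    can score every root-to-leaf path in a single verification pass."""
    return branching ** depth

for depth in (2, 3, 4):
    print(f"depth={depth}: chain={chain_paths(depth)} path, "
          f"tree(k=3)={tree_paths(depth, 3)} paths")
```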

Profiling is required to understand the issue. Opening this issue to track progress.
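As a starting point for that profiling, one possible approach (a sketch, not an official workflow) is to wrap a short offline run in `torch.profiler` and look for CPU/GPU synchronization gaps between draft and target steps:

```python
# Sketch: profile a short EAGLE run with torch.profiler. With TP>1 the model
# workers run in separate processes, so this mainly captures the driver and
# scheduler side; it is still useful for spotting CPU<->GPU sync and
# scheduling gaps between draft and target steps.
import torch
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",
    tensor_parallel_size=4,
    speculative_model="yuhuili/EAGLE-LLaMA3-Instruct-70B",
    num_speculative_tokens=4,  # assumed draft length
)

with torch.profiler.profile(
    activities=[torch.profiler.ProfilerActivity.CPU,
                torch.profiler.ProfilerActivity.CUDA],
) as prof:
    llm.generate(["Hello, world."], SamplingParams(max_tokens=64))

prof.export_chrome_trace("eagle_sd_trace.json")  # inspect in Perfetto / chrome://tracing
```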


fengyang95 commented 1 week ago

@LiuXiaoxuanPKU Hi, what is the acceptance rate in your tests? I trained a draft model for DeepSeek-v2, and the acceptance rate in my testing is less than 20%. Maybe you should use meta-llama/Meta-Llama-3-70B-Instruct to match the draft model.
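For a rough sense of what a <20% acceptance rate implies, the standard chain-drafting estimate gives the expected tokens emitted per target forward pass. This is a back-of-the-envelope sketch only: it assumes an independent per-position acceptance probability `alpha`, which is related to but not identical to vLLM's logged draft acceptance rate (accepted / proposed):

```python
# Back-of-the-envelope only: expected tokens per target forward pass for
# chain drafting with draft length k and per-position acceptance probability
# alpha, E = (1 - alpha**(k + 1)) / (1 - alpha). Assumes independent
# per-position acceptance; vLLM's logged draft acceptance rate is a related
# but not identical quantity.

def expected_tokens_per_step(alpha: float, k: int) -> float:
    return (1 - alpha ** (k + 1)) / (1 - alpha)

for alpha in (0.2, 0.3, 0.6, 0.8):
    print(f"alpha={alpha:.1f}, k=3 -> {expected_tokens_per_step(alpha, 3):.2f} tokens/step")
```

With alpha around 0.2 this is only about 1.25 tokens per step, so after paying the draft-model overhead there is essentially no end-to-end speedup; alpha needs to be well above 0.5 before a ~2x gain becomes plausible.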

wooyeonlee0 commented 4 days ago

@LiuXiaoxuanPKU Thanks for sharing the interesting result :) This issue looks focused on system-side optimizations, but in the PR that introduced Eagle there is a discussion about the low acceptance rate when using a large k: https://github.com/vllm-project/vllm/pull/6830/files#r1710769971 Does this acceptance rate issue still exist? What was the acceptance rate in your experiment, @LiuXiaoxuanPKU?

Lin-Qingyang-Alec commented 1 day ago

The EAGLE model here looks inconsistent with the implementation described in the paper: the version implemented in the paper lacks two rms_norm operations.

bettybaii commented 4 hours ago

@LiuXiaoxuanPKU Thanks for sharing this interesting result; I’m very interested in it as well.

However, can yuhuili/EAGLE-LLaMA3-Instruct-70B be used directly as a draft model? In my experiments, I found it necessary to convert the trained EAGLE checkpoint to a vLLM-compatible version, similar to the process described here: eagle.py. After the conversion, though, the draft model's parameter size increased significantly (from 1.55 GB to 3.4048 GB), which consumed a substantial amount of GPU memory and considerably extended the draft model's computation time (with average_time_per_proposal_tok_ms reaching nearly 4 ms).

Additionally, when using meta-llama/Meta-Llama-3-8B-Instruct as the target model and the converted yuhuili/EAGLE-LLaMA3-Instruct-8B as the draft model, I observed that with num_speculative_tokens set to 3, the acceptance rate was only around 29.6%.