Open LiuXiaoxuanPKU opened 1 month ago
@LiuXiaoxuanPKU Hi, what is the acceptance rate based on your tests? I trained a draft model for DeepSeek-v2, and the acceptance rate from testing is less than 20%. Maybe you should use meta-llama/Meta-Llama-3-70B-Instruct to match the draft model.
@LiuXiaoxuanPKU Thanks for sharing the interesting result :) But this issue looks focused on system-side optimizations. In the PR that introduced EAGLE, there's a discussion about the low acceptance rate when using a large k: https://github.com/vllm-project/vllm/pull/6830/files#r1710769971 Does this acceptance rate issue still exist? How was the acceptance rate in your experiment, @LiuXiaoxuanPKU?
The EAGLE model here looks inconsistent with the version implemented in the paper: this implementation is missing two rms_norm operations.
@LiuXiaoxuanPKU Thanks for sharing this interesting result; I’m very interested in it as well.
However, can yuhuili/EAGLE-LLaMA3-Instruct-70B be directly used as a draft model? In my experiments, I found it necessary to convert the trained EAGLE checkpoint to a vLLM-compatible version, similar to the process described here: eagle.py. After conversion, though, the draft model's parameter size increased significantly (from 1.55GB to 3.4048GB), which consumed a substantial amount of GPU memory and considerably extended the draft model's computation time (with average_time_per_proposal_tok_ms reaching nearly 4 ms).
Additionally, when using meta-llama/Meta-Llama-3-8B-Instruct as the target model and the converted yuhuili/EAGLE-LLaMA3-Instruct-8B as the draft model, I observed that with num_speculative_tokens set to 3, the acceptance rate was only around 29.6%.
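Regarding the checkpoint size increase after conversion mentioned above, one quick way to see which tensors account for the extra bytes is to dump per-tensor sizes for both the original and the converted checkpoints. A small diagnostic sketch (my own, not from vLLM; the paths are placeholders):

```python
# Diagnostic sketch: list tensors by size to see where a converted EAGLE
# checkpoint grows relative to the original. Paths below are placeholders.
import torch

def summarize(path: str) -> None:
    state_dict = torch.load(path, map_location="cpu")
    total = 0
    for name, tensor in sorted(state_dict.items(),
                               key=lambda kv: -kv[1].numel()):
        nbytes = tensor.numel() * tensor.element_size()
        total += nbytes
        print(f"{name:60s} {str(tuple(tensor.shape)):>20s} {nbytes / 1e9:6.3f} GB")
    print(f"total: {total / 1e9:.3f} GB")

summarize("EAGLE-LLaMA3-Instruct-8B/pytorch_model.bin")  # original checkpoint (placeholder path)
summarize("eagle-llama3-8b-vllm/pytorch_model.bin")      # converted checkpoint (placeholder path)
```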
Some preliminary acceptance rate numbers on ShareGPT with llama3-70B, with the help of @OliviaMimiChen:

Number of speculative tokens = 1
Speculative metrics: Draft acceptance rate: 0.489, System efficiency: 0.744, Number of speculative tokens: 1, Number of accepted tokens: 579528, Number of draft tokens: 1185420, Number of emitted tokens: 1764948.

Number of speculative tokens = 2
Speculative metrics: Draft acceptance rate: 0.341, System efficiency: 0.530, Number of speculative tokens: 2, Number of accepted tokens: 736026, Number of draft tokens: 2157850, Number of emitted tokens: 1714732.

Number of speculative tokens = 3
Speculative metrics: Draft acceptance rate: 0.282, System efficiency: 0.405, Number of speculative tokens: 3, Number of accepted tokens: 883021, Number of draft tokens: 3135405, Number of emitted tokens: 1693914.
I think the numbers are weird because:
1. The acceptance rate changes based on the number of spec tokens, which is not expected. The acceptance rate should not be affected by the proposed length.
2. The acceptance rate is much smaller than the system efficiency. This is also weird, as normally the acceptance rate should be higher.
We are in the process of (1) debugging the acceptance rate and (2) nsys profiling to understand the overhead of each part.
We will keep you guys posted; any discussion/comments are appreciated!
> The acceptance rate changes based on the number of spec tokens, which is not expected. The acceptance rate should not be affected by the proposed length.

Maybe I am missing something, but isn't this expected? Generally, a draft model's ability to predict tokens at later time steps becomes worse. So if a draft model is predicting 2 spec tokens and it gets just the first token right, the acceptance rate will be 0.5, whereas if a draft model is predicting 3 spec tokens and it gets just the first token right, the acceptance rate will be 0.33.

> The acceptance rate is much smaller than the system efficiency. This is also weird, as normally the acceptance rate should be higher.

I believe this is mostly because the bonus token is included in the calculation of system efficiency, whereas it is not included in the acceptance rate.
It might be a bit confusing. In vLLM, the token acceptance rate also includes tokens after a 'wrong prediction'. For example, if 1 means the token is accepted and 0 means the token is not accepted, and after proposing 4 tokens we have an acceptance vector of [1, 0, 1, 0], then the token acceptance rate is 2 / 4 = 0.5 and the system efficiency is (1+1) / (4+1) = 0.4. For system efficiency, 1+1 means the accepted token plus the bonus token, and 4+1 is the maximum number of tokens that can be generated in this forward pass. Ideally, the acceptance rate should be independent of the proposed length, because it models the draft model's capability to mimic the target model. On the other hand, system efficiency is affected by the proposed length. Let me know if there are more questions, thanks!
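For concreteness, here is a minimal sketch of the two metrics as described above (my own illustration, not vLLM's actual metrics code; the acceptance-vector representation is assumed):

```python
# Illustrative sketch of the two metrics described above; not vLLM's metrics
# code. Each acceptance vector holds one 0/1 entry per proposed token.
def draft_acceptance_rate(acceptance_vectors):
    # Counts every accepted draft token, even those after a rejection.
    accepted = sum(sum(v) for v in acceptance_vectors)
    drafted = sum(len(v) for v in acceptance_vectors)
    return accepted / drafted

def system_efficiency(acceptance_vectors):
    # Only the prefix of accepted tokens is emitted, plus one bonus token,
    # out of at most k + 1 tokens per forward pass.
    emitted, possible = 0, 0
    for v in acceptance_vectors:
        prefix = 0
        for bit in v:
            if bit != 1:
                break
            prefix += 1
        emitted += prefix + 1   # accepted prefix + bonus token
        possible += len(v) + 1  # k proposed + 1 bonus
    return emitted / possible

vectors = [[1, 0, 1, 0]]
print(draft_acceptance_rate(vectors))  # 0.5
print(system_efficiency(vectors))      # 0.4
```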
I appreciate your insights, @LiuXiaoxuanPKU. However, in Eagle and most speculative decoding methods, the proposal process still follows an autoregressive pattern, where the prediction of each subsequent token depends on the information from the previously predicted token. From my understanding, if an earlier token is predicted incorrectly (deviating from the target model), it is highly likely that subsequent tokens will also be predicted incorrectly. With this in mind, it appears problematic to calculate the acceptance rate for each token position independently, regardless of the proposal length.
Additionally, I am very interested in understanding the time overhead associated with Eagle’s proposal stage. Would you be able to share any relevant test data? In my own testing, this overhead has been substantial, and as I understand it, this cost cannot be hidden and significantly impacts the efficiency of speculative decoding.
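To make the earlier point about position-dependent acceptance concrete, here is a small sketch (my own illustration, not vLLM code) that breaks acceptance down by proposal position instead of reporting a single aggregate rate:

```python
# Illustration of per-position acceptance, complementing the single aggregate
# rate discussed above. Not vLLM code; acceptance_vectors is assumed to hold
# one 0/1 entry per proposed token, as in the earlier example.
from collections import defaultdict

def per_position_acceptance(acceptance_vectors):
    accepted = defaultdict(int)
    total = defaultdict(int)
    for v in acceptance_vectors:
        for pos, bit in enumerate(v):
            accepted[pos] += bit
            total[pos] += 1
    return {pos: accepted[pos] / total[pos] for pos in sorted(total)}

# Later positions typically accept less often, since they condition on
# earlier draft tokens that may already have diverged from the target.
print(per_position_acceptance([[1, 0, 1, 0], [1, 1, 0, 0]]))
# {0: 1.0, 1: 0.5, 2: 0.5, 3: 0.0}
```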
I observed 20% lower acceptance length numbers compared to the official EAGLE code, using LLaMA3-Instruct 8B as the base model and abhigoyal/EAGLE-LLaMA3-Instruct-8B-vllm as the draft model. I noticed that the vLLM EAGLE model code has two key differences compared to the official EAGLE model code:
After fixing these two issues, the acceptance length of vLLM is now very close to the official EAGLE code.
Proposal to improve performance
The speculative decoding performance of EAGLE is worse than expected, as shown below:
Model: meta-llama/Meta-Llama-3.1-70B-Instruct
Draft model: yuhuili/EAGLE-LLaMA3-Instruct-70B
Hardware: 4xH100
Target model TP: 4
Dataset: ShareGPT
vLLM version: v0.6.1.post2
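For reference, a minimal sketch of the setup above using vLLM's offline API. This is a sketch only: the argument names reflect the speculative decoding flags available around v0.6.1, the num_speculative_tokens value is illustrative, and the draft checkpoint may first need the vLLM-compatible conversion discussed earlier in the thread.

```python
# Minimal sketch of the configuration above (vLLM ~v0.6.1); flag names and the
# num_speculative_tokens value are illustrative, not taken from the report.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",
    tensor_parallel_size=4,  # target model TP=4 on 4xH100
    speculative_model="yuhuili/EAGLE-LLaMA3-Instruct-70B",  # or a converted, vLLM-compatible EAGLE checkpoint
    num_speculative_tokens=3,
    use_v2_block_manager=True,  # often required for speculative decoding on versions of this era
)

outputs = llm.generate(
    ["Summarize the benefits of speculative decoding."],
    SamplingParams(temperature=0.0, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```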
Even at low QPS, the performance is far from the 2x speedup reported in the original EAGLE paper (the light blue line is the original without SD; the solid lines are with SD). We need to understand the performance gap here. Possible reasons include, but are not limited to:
Profiling is required to understand the issue. Opening this issue to track the progress.