Top-1 accuracy is just under 0.8. We will provide detailed experimental results and analysis in the paper.
Any chance you would be able to share the plot of accuracy throughout training? I'm trying to replicate the results in my own codebase and getting performance around 0.4, so I'm assuming I'm doing something wrong.
Of course. You can see the plot for Vicuna 13B at https://api.wandb.ai/links/yuhui-li/kgzd2kc8.
@Liyuhui-12
As I understand it, "test/top_1_acc" is the top-1 accuracy of the EAGLE head on next-token prediction, and "test/0_acc" should designate the same accuracy. However, the two metrics differ. I guess my understanding is wrong; could you tell me the difference between "test/top_1_acc" and "test/0_acc"?
They are roughly the same; the difference is that test/0_acc does not count the tokens in the dataset that are inconsistent with the original LLM's predictions.
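For concreteness, here is a minimal sketch of how that distinction could look in code. This is not the repository's actual evaluation code; the function name and tensor layout are assumptions, and it only illustrates the masking described above.

```python
import torch

def head_accuracies(draft_logits, base_logits, target_ids):
    """Hypothetical sketch comparing two top-1 accuracy variants for a draft head.

    draft_logits: [seq_len, vocab]  logits from the EAGLE/draft head
    base_logits:  [seq_len, vocab]  logits from the original LLM
    target_ids:   [seq_len]         next tokens taken from the dataset
    """
    draft_pred = draft_logits.argmax(dim=-1)
    base_pred = base_logits.argmax(dim=-1)

    # "top_1_acc"-style: draft head vs. dataset tokens over all positions.
    top_1_acc = (draft_pred == target_ids).float().mean()

    # "0_acc"-style: only score positions where the dataset token agrees with
    # the original LLM's own prediction; inconsistent tokens are skipped.
    consistent = base_pred == target_ids
    zero_acc = (draft_pred[consistent] == target_ids[consistent]).float().mean()

    return top_1_acc.item(), zero_acc.item()
```

Under a definition like this, the second metric is computed over fewer positions, which is one reason the two curves can diverge in the logs.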
Can you provide code for testing head accuracy, like the Medusa project? https://github.com/FasterDecoding/Medusa/blob/v1.0-prerelease/medusa/eval/heads_accuracy.py
Hello, I was just wondering what your top-1 accuracy is for the EAGLE heads.