AIoT-MLSys-Lab / Efficient-LLMs-Survey

[TMLR 2024] Efficient Large Language Models: A Survey
https://arxiv.org/abs/2312.03863

Suggestion for incorporating a speculative decoding paper #36

Open smart-lty opened 2 months ago

smart-lty commented 2 months ago

Thanks for your great work! I wanted to bring to your attention our recent work PEARL, a parallel speculative decoding framework that achieves adaptive draft length. It delivers significant speedups for LLM inference without any training or fine-tuning, and it can also be seen as an effective way to reduce the draft-model overhead in speculative decoding. We believe that integrating PEARL into your repository could provide substantial benefits to your users. I have attached links to our paper and codebase below for your reference.

paper: https://arxiv.org/abs/2408.11850
blog: https://pearl-code.github.io/
code: https://github.com/smart-lty/ParallelSpeculativeDecoding
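For context, here is a minimal sketch of the vanilla draft-then-verify loop that speculative decoding methods build on. This is not PEARL's algorithm; `draft_dist` and `target_dist` are hypothetical toy stand-ins for real model forward passes, and `GAMMA` is the fixed draft length that adaptive-draft-length methods like PEARL aim to replace.

```python
# Minimal sketch of standard speculative decoding (draft-then-verify).
# NOT the PEARL algorithm; toy distributions stand in for real models.
import numpy as np

VOCAB = 8   # toy vocabulary size
GAMMA = 4   # fixed draft length (the quantity PEARL makes adaptive)
rng = np.random.default_rng(0)

def draft_dist(ctx):
    """Hypothetical small draft model: next-token distribution given a context."""
    logits = rng.standard_normal(VOCAB) + 0.1 * len(ctx)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def target_dist(ctx):
    """Hypothetical large target model: next-token distribution given a context."""
    logits = rng.standard_normal(VOCAB) + 0.1 * len(ctx)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def speculative_step(ctx):
    """One draft-then-verify round; returns the newly accepted tokens."""
    # 1) Draft model proposes GAMMA tokens autoregressively.
    proposals, q_probs = [], []
    for _ in range(GAMMA):
        q = draft_dist(ctx + proposals)
        proposals.append(int(rng.choice(VOCAB, p=q)))
        q_probs.append(q)
    # 2) Target model scores every prefix (one batched pass in a real system).
    p_probs = [target_dist(ctx + proposals[:i]) for i in range(GAMMA + 1)]
    # 3) Accept each proposal with prob min(1, p/q); stop at the first rejection.
    accepted = []
    for i, tok in enumerate(proposals):
        p, q = p_probs[i][tok], q_probs[i][tok]
        if rng.random() < min(1.0, p / q):
            accepted.append(tok)
        else:
            # Resample the rejected position from the residual max(p - q, 0).
            residual = np.maximum(p_probs[i] - q_probs[i], 0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(VOCAB, p=residual)))
            return accepted
    # All proposals accepted: take one bonus token from the target model.
    accepted.append(int(rng.choice(VOCAB, p=p_probs[GAMMA])))
    return accepted

ctx = [0]
for _ in range(3):
    ctx += speculative_step(ctx)
print("generated:", ctx)
```

In this baseline, the draft length `GAMMA` is a fixed hyperparameter: too short wastes verification bandwidth, too long wastes draft work on tokens likely to be rejected, which is the tension adaptive draft length addresses.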

SUSTechBruce commented 2 months ago

Thanks. Could you add this paper using the format "Paper Title, Conference/Journal/Preprint, Year [pdf] [other resources]"?

smart-lty commented 2 months ago

Sure! Parallel Speculative Decoding with Adaptive Draft Length, arXiv, 2024. [pdf] [code] [blog]