Thanks for your great work! I wanted to bring to your attention our recent work PEARL, a parallel speculative decoding framework that achieves adaptive draft lengths. PEARL delivers significant speedups for LLM inference without any training or fine-tuning, and it can also be seen as an effective way to reduce the draft-model overhead in speculative decoding. We believe that integrating PEARL into your repository could provide substantial benefits to your users. I have attached links to our paper, blog, and codebase below for your reference.
paper: https://arxiv.org/abs/2408.11850
blog: https://pearl-code.github.io/
code: https://github.com/smart-lty/ParallelSpeculativeDecoding