vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Feature]: Request for SmartSpec Method Support #5886

Closed bong-furiosa closed 3 months ago

bong-furiosa commented 3 months ago

🚀 The feature, motivation and pitch

Recently, we read a paper in which the vLLM team proposed a method called SmartSpec. We believe this work, which dynamically adjusts the speculation length inside a production LLM serving system, is more practical than existing studies on dynamic speculation length.

This idea could be applied to vLLM's current speculative decoding with Batch Expansion enabled, and it might also be applicable to future versions of vLLM with Batch Expansion disabled. (I am curious whether the SmartSpec research was conducted on vLLM with Batch Expansion enabled. :thinking:)

I wonder whether the SmartSpec method will be implemented in the main repository in the near future. A rough sketch of the kind of dynamic adjustment we have in mind follows below.
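To make the idea concrete, here is a minimal, purely illustrative sketch of choosing the speculation length per step from an estimated acceptance rate and the current batch size. The function names, constants, and the goodput/cost model are our own assumptions for illustration; they are not vLLM's API and not the SmartSpec paper's actual implementation.

```python
# Illustrative sketch only: pick the number of draft tokens per step that
# maximizes estimated goodput under a toy cost model. Not vLLM's or the
# SmartSpec paper's implementation; all constants are hypothetical.

def estimate_goodput(k: int, acceptance_rate: float, batch_size: int) -> float:
    """Estimated generated tokens per unit time when proposing k draft tokens."""
    # Expected tokens emitted per sequence per step: accepted draft tokens plus
    # the one token the target model always produces at verification.
    expected_tokens = sum(acceptance_rate ** i for i in range(1, k + 1)) + 1.0
    # Toy cost model: drafting costs one unit per draft token; verification is
    # memory-bound (flat) at low load and compute-bound (linear in the number
    # of verified tokens) at high load.
    draft_time = 1.0 * k
    verify_time = max(10.0, 0.3 * batch_size * (k + 1))
    return batch_size * expected_tokens / (draft_time + verify_time)


def choose_speculation_length(acceptance_rate: float, batch_size: int,
                              max_k: int = 8) -> int:
    """Pick k in [0, max_k] with the highest estimated goodput (k=0 disables speculation)."""
    return max(range(max_k + 1),
               key=lambda k: estimate_goodput(k, acceptance_rate, batch_size))


if __name__ == "__main__":
    # Under this toy model the optimum shrinks as load grows: longer
    # speculation at small batch sizes, little or none at large batch sizes.
    for bs in (1, 16, 64):
        print(bs, choose_speculation_length(acceptance_rate=0.7, batch_size=bs))
```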

Alternatives

No response

Additional context

No response

LiuXiaoxuanPKU commented 3 months ago

Hi @bong-furiosa, thanks for the interest!

Yes, we implemented SmartSpec on top of vLLM with batch expansion in a forked version. We will integrate SmartSpec into vLLM very soon. The first step is to remove batch expansion (#5691). In the meantime, we also need community effort to improve speculative decoding performance (#4630) and to implement tree-style speculative decoding (#4978). SmartSpec (#4565) itself is very lightweight and can be implemented quickly. After all of the above steps, we should see performance similar to what is described in the paper.

bong-furiosa commented 3 months ago

Since we have received a detailed response, we will close this issue. We are very much looking forward to further developments in vLLM!