vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Feature]: Multi-Proposers support for speculative decoding. #6300

Closed. ShangmingCai closed this issue 5 days ago.

ShangmingCai commented 4 months ago

🚀 The feature, motivation and pitch

Speculative decoding has demonstrated significant potential in efficiently generating proposals and utilizing idle computing power to speed up auto-regressive decoding, particularly under lightweight workloads. Thanks to the remarkable work by @cadedaniel, we have verified the latency benefits of speculative decoding on the latest version of vllm.
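
For context, here is a minimal sketch of the kind of setup we used for this verification, following the documented draft-model configuration (the model names are just the stock example, and the exact flags depend on the vLLM version):

```python
from vllm import LLM, SamplingParams

# Engine-wide speculative decoding: one draft model proposes, the target model verifies.
llm = LLM(
    model="facebook/opt-6.7b",
    speculative_model="facebook/opt-125m",
    num_speculative_tokens=5,
    use_v2_block_manager=True,
)

outputs = llm.generate(
    ["The future of AI is"],
    SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```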

We have observed the following points that we believe could further enhance the utility of speculative decoding:

Apart from these observations, we were particularly interested in your latest work on speculative length scheduling for different workload scenarios (#5886), described in Optimizing Speculative Decoding for Serving Large Language Models Using Goodput.

This led us to wonder whether vllm could be enhanced to support multiple proposers and provide the flexibility to schedule them appropriately. Alternatively, enabling users to specify the proposer for each request via SamplingParams could also be a viable solution (see the illustrative sketch below).
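
Purely as an illustration of the second option (none of the fields below exist in vLLM; the names are made up for this sketch), per-request proposer selection via SamplingParams might look like:

```python
from vllm import LLM, SamplingParams

# Hypothetical: register several proposers instead of a single speculative_model.
llm = LLM(
    model="facebook/opt-6.7b",
    speculative_models=["facebook/opt-125m", "facebook/opt-350m"],  # not a real flag
)

# Hypothetical `proposer` field: a latency-sensitive request picks the cheapest
# draft model, while a throughput-oriented request picks a stronger draft or none.
chat_params = SamplingParams(max_tokens=32, proposer="facebook/opt-125m")
batch_params = SamplingParams(max_tokens=256, proposer=None)

llm.generate(["short interactive prompt"], chat_params)
llm.generate(["long offline prompt"], batch_params)
```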

We believe this enhancement could unlock greater potential and adaptivity for vllm's speculative decoding capabilities. We are working on an internal fork to verify whether we can achieve higher goodput.
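
As a rough illustration of why per-request scheduling matters for goodput: under the standard i.i.d. acceptance model with token acceptance rate alpha and speculation length k, the expected number of tokens emitted per verification step is (1 - alpha^(k+1)) / (1 - alpha) (Leviathan et al., 2023), so the best speculation length, and the best proposer, differs across workloads:

```python
def expected_tokens_per_step(alpha: float, k: int) -> float:
    """Expected tokens emitted per verification step: up to k drafted tokens
    plus one bonus token, assuming i.i.d. acceptance with rate alpha."""
    if alpha >= 1.0:
        return k + 1
    return (1 - alpha ** (k + 1)) / (1 - alpha)

# A weak proposer (low alpha) gains little from long speculation lengths,
# while a strong proposer (high alpha) keeps benefiting from longer ones.
for alpha in (0.5, 0.8, 0.95):
    print(alpha, [round(expected_tokens_per_step(alpha, k), 2) for k in (1, 3, 5, 8)])
```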

Thanks! Feel free to leave a message and let us know what you think.

Alternatives

No response

Additional context

No response

cadedaniel commented 4 months ago

These are great ideas! Contributions welcome :)

ShangmingCai commented 4 months ago

> These are great ideas! Contributions welcome :)

Thanks! Once we have verified the performance improvement in our internal version, I will submit a PR to begin integrating this feature into the open-source repository, and I will keep you updated on our progress.

Also, if there is any progress on the integration of SmartSpec, please let me know. cc @LiuXiaoxuanPKU

Feel free to contact me anytime if there are any changes or additions you would like!

cadedaniel commented 4 months ago

One other idea you should consider is using a multi-LoRA draft model.

ShangmingCai commented 4 months ago

> One other idea you should consider is using a multi-LoRA draft model.

Brilliant! The design philosophy of multi-proposers is similar to that of multi-LoRA support. Also, the choice of proposer should not be set through sampling_params; instead, it should be left to the service provider to schedule autonomously in the generate() function, just like LoRA (see the sketch below).
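
To make the analogy concrete, this is how a multi-LoRA adapter is already selected per request in generate(), plus a purely hypothetical ProposerRequest following the same pattern (the commented-out part does not exist in vLLM):

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)

# Today: the service provider picks the adapter per request via LoRARequest.
llm.generate(
    ["Write a SQL query that ..."],
    SamplingParams(max_tokens=64),
    lora_request=LoRARequest("sql_adapter", 1, "/path/to/sql_lora"),
)

# Hypothetical analog: the provider would pick the proposer per request the same
# way, instead of exposing the choice to end users through sampling_params.
# llm.generate(
#     prompts,
#     sampling_params,
#     proposer_request=ProposerRequest("opt-125m-draft"),  # does not exist
# )
```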

Although SpecDecodeWorker does not support LoRA at this stage, I will keep the combination of spec decode and LoRA in mind and advance it step by step :)

cadedaniel commented 4 months ago

Sounds good. BTW, I don't think we should let users decide the spec method, since it gives them too much ability to impact other users -- it should be set by the service provider.

github-actions[bot] commented 1 month ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

github-actions[bot] commented 5 days ago

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!