Open

iofu728 opened this issue 1 week ago

🚀 The feature, motivation and pitch

Hi folks,

Thank you for your great effort in implementing KV cache compression methods in vLLM. I recently tried running experiments with tensor parallelism enabled, and I wanted to ask whether there are plans to support tensor parallel inference, as it would be very helpful. Thanks again for your work!
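For concreteness, this is the kind of setup I am trying to run -- a minimal sketch using vLLM's standard `tensor_parallel_size` argument; the model name and parallel degree are just placeholders:

```python
from vllm import LLM, SamplingParams

# Shard the model across two GPUs with vLLM's standard tensor-parallel setting.
# The model name and tensor_parallel_size value are examples; the request is
# for the KV cache compression path to work when tensor_parallel_size > 1.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", tensor_parallel_size=2)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```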
Alternatives

No response

Additional context

No response

Hi, thanks for your interest,

We'll be adding support for several vLLM features (initially excluded for simplicity) as we work to upstream these changes over the next few weeks. TP should be a quick one -- I'll update here once it's supported :)

Thanks for your response! Looking forward to the next version!