IsaacRe / vllm-kvcompress

KV cache compression for high-throughput LLM inference
Apache License 2.0

[Feature]: Support tensor parallel #1

Open iofu728 opened 1 week ago

iofu728 commented 1 week ago

🚀 The feature, motivation and pitch

Hi folks,

Thank you for your great effort in implementing KV cache compression methods in vLLM. I recently tried running experiments with tensor parallelism enabled, and I wanted to ask whether there are any plans to support it, as that would be very helpful. Thanks again for your work!

Alternatives

No response

Additional context

  File "/home/aiscuser/vllm-kvcompress/vllm/config.py", line 2089, in __post_init__
    self.cache_config.verify_with_parallel_config(self.parallel_config)
  File "/home/aiscuser/vllm-kvcompress/vllm/config.py", line 703, in verify_with_parallel_config
    raise ValueError("KV-Compress with multi-GPU not yet supported")
ValueError: KV-Compress with multi-GPU not yet supported
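For reference, a minimal sketch of the setup that trips this check, assuming KV-Compress is driven through the standard vLLM `LLM` entrypoint; the compression flag below is a placeholder, not necessarily the fork's real argument name:

```python
# Sketch only: `enable_kv_compress` is a placeholder flag for whatever argument
# vllm-kvcompress uses to turn compression on; `tensor_parallel_size` is the
# standard vLLM argument for tensor parallelism.
from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-2-7b-hf",
    tensor_parallel_size=2,    # multi-GPU: parallel world size > 1
    enable_kv_compress=True,   # placeholder: KV-Compress enabled
)
# During config construction, cache_config.verify_with_parallel_config()
# raises ValueError("KV-Compress with multi-GPU not yet supported").
```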


IsaacRe commented 1 week ago

Hi, thanks for your interest!

We'll be adding support for several vLLM features (initially excluded for simplicity) as we work to upstream KV-Compress over the next few weeks. TP should be a quick one--I'll update here once it's supported :)

iofu728 commented 1 week ago

Thanks for your response! Looking forward to your next version!