jchang98 opened this issue 7 months ago
Thank you for bringing this to our attention. Unfortunately, the current version of vLLM does not support returning attention scores. However, this functionality is planned for the next release.
In the meantime, we are working diligently to implement paged attention (the key feature of vLLM) as well as flash decoding. These enhancements aim to accelerate generation and reduce the GPU memory footprint of the KV cache.
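For anyone wondering why the KV cache dominates decoding memory: it grows linearly with sequence length. A rough back-of-the-envelope sketch, assuming Llama-2-7B-style dimensions (32 layers, 32 KV heads, head dim 128, fp16) which are my own illustrative numbers, not something stated in this thread:

```python
# Rough KV-cache size estimate. The default dimensions below are
# Llama-2-7B-like and purely illustrative.
def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=32, head_dim=128, dtype_bytes=2):
    # 2x for keys and values; one (head_dim)-vector per layer/head/position
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * seq_len

per_token = kv_cache_bytes(1)       # 524288 bytes = 0.5 MiB per token
full_ctx = kv_cache_bytes(4096)     # 2 GiB for a 4k context, per sequence
print(per_token, full_ctx / 2**30)
```

At ~0.5 MiB per token per sequence, a single 4k-token request already holds about 2 GiB of cache, which is why paging and flash decoding matter for long-context serving.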
We appreciate your patience while we work on these developments. Stay tuned for updates.
@ChenxinAn-fdu OK, thanks for your response
I have pushed the code for flash decoding; it significantly decreases memory consumption when decoding with the KV cache. It may be helpful for you.
Looking forward to the support in vLLM!
@ChenxinAn-fdu Does vLLM support DCA now? We'd like to use this feature in deployment.
@Shuai-Xie Hi, I opened an issue in their official repo, but it seems the current version of vLLM only supports returning the output tensor without softmax_lse. We plan to implement it ourselves.
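For context on why softmax_lse matters here: flash decoding splits the KV cache into chunks, computes attention over each chunk independently, and then needs each chunk's log-sum-exp to merge the partial outputs into the exact full-attention result. A minimal NumPy sketch of that merge (my own illustration of the technique, not vLLM's or this repo's API):

```python
import numpy as np

def attn_chunk(q, k, v):
    """Attention of a single query over one KV chunk.
    Returns the chunk output and its softmax log-sum-exp (lse)."""
    s = q @ k.T / np.sqrt(q.shape[-1])              # (1, chunk_len) scores
    lse = np.log(np.exp(s).sum(-1, keepdims=True))  # this is softmax_lse
    out = np.exp(s - lse) @ v                       # softmax(s) @ v
    return out, lse

def merge(out1, lse1, out2, lse2):
    """Combine two partial attention outputs using their lse values."""
    lse = np.logaddexp(lse1, lse2)  # lse of the concatenated chunks
    return np.exp(lse1 - lse) * out1 + np.exp(lse2 - lse) * out2

rng = np.random.default_rng(0)
q = rng.standard_normal((1, 64))
k = rng.standard_normal((128, 64))
v = rng.standard_normal((128, 64))

# Attention computed in two chunks, then merged via lse...
o1, l1 = attn_chunk(q, k[:64], v[:64])
o2, l2 = attn_chunk(q, k[64:], v[64:])
merged = merge(o1, l1, o2, l2)

# ...matches attention over the full KV in one shot.
full, _ = attn_chunk(q, k, v)
assert np.allclose(merged, full)
```

Without the lse, the per-chunk softmaxes are normalized over different denominators and cannot be combined exactly, which is why a kernel that returns only the output tensor is not enough for chunked decoding.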
If you do not need continuous batching, the current repo has implemented flash_decoding. You can use it for some preliminary experiments.
How can I use this approach in a vLLM deployment without training? Can you give me a specific example? Thanks!