HKUNLP / ChunkLlama

[ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models"
Apache License 2.0

How do I use it in vllm deployment #3

Open jchang98 opened 7 months ago

jchang98 commented 7 months ago

How can I use this approach in a vLLM deployment without training? Can you give me a specific example? Thanks.

ChenxinAn-fdu commented 7 months ago

Thank you for bringing this to our attention. Unfortunately, the current version of vLLM does not support returning attention scores. However, this functionality is planned for the next release of the software.

In the meantime, we are working on implementing paged attention (the key feature of vLLM) as well as flash decoding. These enhancements aim to accelerate generation and reduce the GPU memory footprint of the KV cache.

We appreciate your patience while we work on these developments. Stay tuned for updates.
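
For context on why the missing attention scores matter: dual chunk attention computes attention over the KV cache in chunks and then renormalizes the per-chunk outputs into the exact full-attention result. That merge step needs each chunk's log-sum-exp of the attention logits (softmax_lse), which is precisely what vLLM's kernels do not return today. Below is a minimal PyTorch sketch of the merge, for illustration only (it is not this repo's implementation):

```python
import torch

def merge_chunk_attention(outs, lses):
    """Merge per-chunk attention outputs into the exact full-attention output.

    outs: list of [batch, heads, dim] partial outputs, one per KV chunk
    lses: list of [batch, heads] log-sum-exp of that chunk's attention logits
    """
    out = torch.stack(outs)                   # [chunks, batch, heads, dim]
    lse = torch.stack(lses)                   # [chunks, batch, heads]
    global_lse = torch.logsumexp(lse, dim=0)  # normalizer over all chunks
    weights = torch.exp(lse - global_lse)     # per-chunk renormalization factors
    return (weights.unsqueeze(-1) * out).sum(dim=0)

# Toy check: a single decoding query attending to 4 KV chunks matches full attention.
b, h, n, d = 1, 4, 256, 64
q = torch.randn(b, h, d)
k, v = torch.randn(b, h, n, d), torch.randn(b, h, n, d)

outs, lses = [], []
for ks, vs in zip(k.chunk(4, dim=2), v.chunk(4, dim=2)):
    logits = torch.einsum("bhd,bhnd->bhn", q, ks) / d ** 0.5
    lses.append(torch.logsumexp(logits, dim=-1))
    outs.append(torch.einsum("bhn,bhnd->bhd", logits.softmax(-1), vs))

merged = merge_chunk_attention(outs, lses)
full_logits = torch.einsum("bhd,bhnd->bhn", q, k) / d ** 0.5
full = torch.einsum("bhn,bhnd->bhd", full_logits.softmax(-1), v)
assert torch.allclose(merged, full, atol=1e-5)
```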

jchang98 commented 7 months ago

@ChenxinAn-fdu OK, thanks for your response

ChenxinAn-fdu commented 6 months ago

I have pushed the code for flash decoding; it significantly reduces memory consumption when decoding with the KV cache. It may be helpful for you.
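
In case it helps, here is a minimal usage sketch of the monkey-patch workflow. The module and function names below (flash_decoding_chunkllama, replace_with_chunkllama, pretraining_length) are my best reading of this repo's README and should be treated as assumptions; please check the README for the exact entry point of the flash-decoding path:

```python
# Sketch only: the patch module/function names are assumptions, verify against the repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Patch Llama attention with dual chunk attention + flash decoding BEFORE loading the model.
from flash_decoding_chunkllama import replace_with_chunkllama  # assumed module name
replace_with_chunkllama(pretraining_length=4096)               # assumed argument name

model_name = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Summarize the following document: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```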

skyshine102 commented 5 months ago

Looking forward to support in vLLM!

Shuai-Xie commented 5 months ago

@ChenxinAn-fdu Does vLLM support DCA now? We'd like to use this feature in deployment.

ChenxinAn-fdu commented 5 months ago

@Shuai-Xie Hi, I opened an issue in their official repo, but it seems that the current version of vLLM only supports returning the output tensor without softmax_lse. We plan to implement it ourselves.

If you do not need continuous batching, this repo already implements flash decoding. You can use it for some preliminary experiments.
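
For anyone wiring up their own integration in the meantime: the standalone flash-attn Python API can return the log-sum-exp values that the vLLM wrapper currently drops. A rough sketch is below; note that return_attn_probs is documented as a testing-only flag and the exact return tuple may differ across flash-attn versions, so treat this as an assumption rather than a stable interface:

```python
import torch
from flash_attn import flash_attn_func

b, s, h, d = 1, 1024, 32, 128
q = torch.randn(b, s, h, d, dtype=torch.bfloat16, device="cuda")
k = torch.randn(b, s, h, d, dtype=torch.bfloat16, device="cuda")
v = torch.randn(b, s, h, d, dtype=torch.bfloat16, device="cuda")

# With return_attn_probs=True the kernel also returns softmax_lse
# (roughly [batch, heads, seqlen]); this is the quantity needed to merge
# attention computed over separate KV chunks.
out, softmax_lse, _ = flash_attn_func(q, k, v, causal=True, return_attn_probs=True)
```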