[Feature]: Enable Prefix caching kernel on Pallas for TPU backend #7607

Open miladm opened 3 months ago

miladm commented 3 months ago

🚀 The feature, motivation and pitch

Enable the prefix caching kernel in Pallas for the TPU backend.

According to @WoosukKwon, we already have Triton and CUDA implementations of this kernel; a Pallas port is needed for TPUs.
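For anyone unfamiliar with Pallas, here is a minimal, purely illustrative kernel skeleton showing the general structure a Pallas TPU kernel takes (kernel body operating on refs, launched via `pl.pallas_call`). This is not the prefix caching attention kernel itself, just a sketch of the framework the port would target; the `add_kernel`/`add` names are made up for the example.

```python
# Minimal JAX Pallas kernel sketch (illustrative only; not the actual
# prefix-caching attention kernel requested in this issue).
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl


def add_kernel(x_ref, y_ref, o_ref):
    # Kernel body: read the input refs and write the elementwise sum
    # into the output ref.
    o_ref[...] = x_ref[...] + y_ref[...]


@jax.jit
def add(x, y):
    # pallas_call compiles the kernel body and returns a callable that
    # produces an output with the given shape/dtype.
    return pl.pallas_call(
        add_kernel,
        out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
    )(x, y)


x = jnp.arange(8, dtype=jnp.float32)
y = jnp.ones(8, dtype=jnp.float32)
print(add(x, y))  # [1. 2. 3. 4. 5. 6. 7. 8.]
```

The actual feature would implement the prefix-caching (context) attention logic, analogous to the existing Triton/CUDA kernels, in this style.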

github-actions[bot] commented 1 week ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!