Open RonanKMcGovern opened 2 months ago
@RonanKMcGovern Thanks for the note! Continuous batching is still in the planning stage; we might tackle it soon (after our conference submission deadline).
I am thinking of implementing it not as a generic API but as a pyreft feature, since intervention locations on the KV cache require some special handling (inputs are concatenated together in that case).
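To make the concatenation point concrete, here is a minimal sketch of the bookkeeping involved: when requests are packed into one sequence for continuous batching, each request's per-sequence intervention positions must be shifted by that request's start offset in the packed input. All names here are invented for illustration; none of this comes from pyreft's actual implementation.

```python
# Hypothetical sketch: offset per-request intervention locations when
# sequences are concatenated for continuous batching. Not pyreft API.

def pack_with_interventions(sequences, intervention_locs):
    """Concatenate token sequences and shift intervention indices.

    sequences: list of token-id lists, one per request.
    intervention_locs: per-request token indices to intervene on.
    Returns the packed token list and absolute intervention indices.
    """
    packed, absolute_locs = [], []
    offset = 0
    for seq, locs in zip(sequences, intervention_locs):
        packed.extend(seq)
        # Shift each local index by where this request starts.
        absolute_locs.extend(offset + i for i in locs)
        offset += len(seq)
    return packed, absolute_locs


packed, locs = pack_with_interventions(
    [[5, 6, 7], [8, 9]],   # two requests' token ids
    [[0, 2], [1]],         # per-request positions to intervene on
)
# packed -> [5, 6, 7, 8, 9]; locs -> [0, 2, 4]
```

A real implementation would also need to remap these indices as the KV cache grows during decoding, but the offset step above is the part that a generic serving API would not know how to do on its own.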
Noted with thanks @frankaging .
Yes, I was thinking that there'll need to be a separate "inputs" field for the intervention.
This may be a bit of a wild idea, but I wonder whether interventions could be passed in the way images are in vLLM and TGI. That's a case where a second "pre-processor" is needed.
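By analogy, a request might carry the intervention as a separate field alongside the prompt, much as multimodal servers accept image inputs next to text. The field names below are invented for illustration; neither pyreft, vLLM, nor TGI defines this schema.

```python
# Hypothetical request shape: intervention data travels as a second
# input next to the prompt, like an image in a multimodal request.
request = {
    "prompt": "Explain continuous batching.",
    "intervention": {
        # Token positions the server-side pre-processor would remap
        # after the prompt is tokenized and packed into a batch.
        "locations": [0, 3],
        # Reference to the trained intervention to apply (invented field).
        "adapter_id": "my-reft-adapter",
    },
}

assert "intervention" in request
```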
The README mentions the ability to serve at scale with continuous batching.
Even if not vLLM or TGI, is there some work that someone could point me to on this?
Is there any functioning package for serving with continuous batching via an endpoint? Thanks