stanfordnlp / pyreft

ReFT: Representation Finetuning for Language Models
https://arxiv.org/abs/2404.03592
Apache License 2.0

[P1] Location of code for "LM training and serving with ReFT" #71

Open RonanKMcGovern opened 2 months ago

RonanKMcGovern commented 2 months ago

The README mentions the ability to serve at scale with continuous batching.

Even if it isn't vLLM or TGI, is there some work someone could point me to on this?

Is there any functioning package for serving with continuous batching via an endpoint? Thanks

frankaging commented 2 months ago

@RonanKMcGovern Thanks for the note! Continuous batching is still in the planning stage right now; we might get to it soon (after our conference submission deadline).

I am thinking of implementing it not as a generic API but as a pyreft feature, since intervention locations on the KV cache require some special handling (inputs are concatenated together in that case).
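
To make the concatenation issue concrete, here is a minimal sketch (hypothetical names throughout; `Request` and `flatten_intervention_locations` are not part of the pyreft API) of how per-request intervention positions would need to be re-based once prompts are packed into a single sequence for continuous batching:

```python
# Hypothetical sketch of re-basing intervention locations for continuous
# batching. Not pyreft's implementation; illustrative names only.
from dataclasses import dataclass
from typing import List

@dataclass
class Request:
    token_ids: List[int]               # prompt tokens for this request
    intervention_locations: List[int]  # positions within this prompt to intervene on

def flatten_intervention_locations(requests: List[Request]) -> List[int]:
    """Map each request's local intervention positions into the coordinate
    space of the concatenated batch."""
    flat_locations = []
    offset = 0
    for req in requests:
        # Each local position is shifted by the number of tokens that
        # precede this request in the concatenated input.
        flat_locations.extend(offset + loc for loc in req.intervention_locations)
        offset += len(req.token_ids)
    return flat_locations

# Example: two prompts packed back to back into one sequence.
reqs = [
    Request(token_ids=[101, 7592, 102], intervention_locations=[0, 2]),
    Request(token_ids=[101, 2088, 999, 102], intervention_locations=[1]),
]
print(flatten_intervention_locations(reqs))  # [0, 2, 4]
```

This is exactly the bookkeeping a generic serving API would not do for us, which is why it would likely live inside pyreft.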

RonanKMcGovern commented 2 months ago

Noted with thanks @frankaging .

Yes, I was thinking that there'll need to be a separate "inputs" field for the intervention.

This may be a bit of a wild idea, but I wonder if the interventions could be passed in the way images are in vLLM and TGI. It's a similar case where a second "pre-processor" is needed.
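
Purely to illustrate the idea (none of these field names exist in vLLM, TGI, or pyreft today), a request to such an endpoint might carry the intervention alongside the prompt, the way multimodal requests carry images:

```python
# Hypothetical request payload for a ReFT-aware serving endpoint, modeled on
# how multimodal inputs travel alongside the prompt. All field names here
# are made up for the sake of the sketch.
request_payload = {
    "prompt": "Summarize the following document: ...",
    # Separate "intervention" field, analogous to an image field in a
    # multimodal request. A second pre-processor would resolve this spec
    # into concrete token positions before the request enters batching.
    "intervention": {
        "adapter_id": "reft-summarizer-v1",        # which trained ReFT module to apply
        "locations": {"first_n": 2, "last_n": 2},  # prompt positions to intervene on
    },
    "max_tokens": 256,
}
```

The pre-processor would then hand the engine a normal batched request plus the flattened intervention locations, along the lines of the offset sketch above.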