microsoft / aici

AICI: Prompts as (Wasm) Programs

[RFC] drop pre/post callback, only leave mid #67

Closed mmoskal closed 4 months ago

mmoskal commented 4 months ago

The pre_process and post_process callbacks currently run in the critical path of inference. We have measured the overhead at about 0.3ms per token in rLLM, and it may be worse with Python-based LLM infrastructure. The overhead is primarily inter-process communication delay (in particular, the OS can decide to de-schedule one of the involved processes).
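
For context, a rough sketch of the current per-token protocol (the names below are simplified and hypothetical; the real trait lives in aici_abi and differs in detail). Each callback is a separate round-trip between the inference engine and the Wasm controller process, which is where the per-token cost comes from:

```rust
// Simplified, hypothetical sketch of the current three-callback flow;
// not the exact aici_abi signatures. Every call below is an IPC
// round-trip on the inference critical path.
type TokenId = u32;

struct PreProcessResult { suspend: bool, num_forks: usize }
struct MidProcessResult { allowed_tokens: Vec<TokenId> }
struct PostProcessResult { backtrack: u32 }

trait Controller {
    // Before logit computation: may suspend the sequence or request forks.
    fn pre_process(&mut self) -> PreProcessResult;
    // With logits available: constrain what can be sampled.
    fn mid_process(&mut self) -> MidProcessResult;
    // After sampling: accept the token or ask to backtrack.
    fn post_process(&mut self, sampled: TokenId) -> PostProcessResult;
}
```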

A solution would be to leave only the mid_process callback, and to add possible return values indicating that further generation needs to be forked, or that the current token needs to be discarded (the latter is already more or less supported via backtrack=1).
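
A minimal sketch of what such an enriched mid_process result could look like (all names here are illustrative, not a committed API):

```rust
// Illustrative sketch of a mid-only protocol; all names are hypothetical.
type TokenId = u32;

enum MidResult {
    // Constrain the next sample to these tokens (the existing behavior).
    Sample { allowed_tokens: Vec<TokenId> },
    // Fork generation into this many branches.
    Fork { num_branches: usize },
    // Drop the last `backtrack` tokens, then append `ff_tokens` verbatim.
    // backtrack = 1 expresses "discard the token just sampled".
    Splice { backtrack: u32, ff_tokens: Vec<TokenId> },
    // Stop this sequence.
    Stop,
}
```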

The downside is that certain operations may incur a one-token overhead in some cases.

Many of these can be mitigated to some extent (e.g., when requesting a fork we could return splice commands for each branch, and when returning a small set of allowed tokens we could say "if token X is selected, then fast-forward by Y Z W"); a sketch follows.
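
Concretely, the mitigation could be expressed as per-branch splices with a conditional fast-forward, along these (hypothetical) lines:

```rust
// Hypothetical shapes for the mitigations above; names are illustrative.
type TokenId = u32;

struct Splice {
    // Apply this splice only if one of these tokens was sampled;
    // an empty list means "apply unconditionally".
    when_sampled: Vec<TokenId>,
    // Number of tokens to drop before appending ff_tokens.
    backtrack: u32,
    // Tokens to fast-forward without further model sampling.
    ff_tokens: Vec<TokenId>,
}

struct Branch {
    // Tokens allowed for sampling in this branch (None = no sampling).
    sample_set: Option<Vec<TokenId>>,
    // "If token X is selected, fast-forward by Y Z W" becomes:
    //   Splice { when_sampled: vec![X], backtrack: 0, ff_tokens: vec![Y, Z, W] }
    splices: Vec<Splice>,
}

// mid_process would then return Vec<Branch>; more than one Branch
// expresses a fork, with each branch carrying its own splice commands,
// avoiding an extra round-trip.
```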

It would also be impossible to directly implement lock-step generation between different forks, making certain beam-search approaches harder.

The advantage is a much simpler interface with no per-token overhead, which might be an easier sell for LLM infrastructure folks.