robertknight / rten

ONNX neural network inference engine

Reserve KV cache capacity after the first model run #408

Closed robertknight closed 3 days ago

robertknight commented 3 days ago

Hugging Face models with separate branches for the first and subsequent iterations do not use the input KV cache buffer on the first run. As a result, they do not benefit from the pre-allocated capacity and end up re-allocating a new KV cache buffer on each run.

To resolve this, change the KV cache growth strategy to grow the buffer after each model run, once its capacity has been exhausted. Also replace the hard-coded capacity increment with a strategy that doubles the capacity each time. This amortizes the cost of copying the old KV cache into the new buffer.
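The grow-after-run strategy can be sketched roughly as follows. This is a simplified illustration, not rten's actual implementation; the `KvCache` type, its fields, and the method names here are all hypothetical:

```rust
/// Hypothetical sketch of a KV cache buffer with a doubling growth
/// strategy; names and layout are illustrative, not rten's real API.
struct KvCache {
    buf: Vec<f32>,   // backing storage, `capacity` elements allocated
    len: usize,      // elements currently in use
    capacity: usize, // reserved capacity in elements
}

impl KvCache {
    fn new(initial_capacity: usize) -> Self {
        KvCache {
            buf: vec![0.0; initial_capacity],
            len: 0,
            capacity: initial_capacity,
        }
    }

    /// Called after each model run. If the cache has filled its
    /// reserved capacity, double it, so the cost of copying the old
    /// cache into the new buffer is amortized over many runs rather
    /// than paid on every run.
    fn grow_if_full(&mut self) {
        if self.len >= self.capacity {
            let new_capacity = (self.capacity * 2).max(1);
            let mut new_buf = vec![0.0; new_capacity];
            new_buf[..self.len].copy_from_slice(&self.buf[..self.len]);
            self.buf = new_buf;
            self.capacity = new_capacity;
        }
    }

    /// Append new KV entries, relying on `grow_if_full` having
    /// reserved capacity after the previous run.
    fn append(&mut self, values: &[f32]) {
        self.buf[self.len..self.len + values.len()].copy_from_slice(values);
        self.len += values.len();
    }
}
```

Growing after the run (rather than reserving extra capacity on the input buffer before it) means models whose first-iteration branch ignores the input cache still end up with a correctly sized buffer for subsequent runs.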

This impacts the Whisper and TrOCR examples. For the Whisper example, this reduced time for transcribing a 2-minute audio clip with the base model from ~6.1s to ~5.7s (~6%). With the larger models the benefit will be greater since the caches are larger.