ggerganov / llama.cpp

LLM inference in C/C++

Feature Request: Apply LoRA adapters per-request #10377

Open ngxson opened 6 days ago

ngxson commented 6 days ago

Feature Description

The server now supports hot-swapping LoRA adapters via the /lora-adapters endpoint, which changes the global adapter config.

With this, the only "safe" moment to apply LoRA changes is when all slots are idle.

However, this is not practical when the server handles a high volume of requests (ref: https://github.com/ggerganov/llama.cpp/issues/10374). With continuous batching, the chance that all slots become idle at the same time is low.

Motivation

-

Possible Implementation

  1. Group requests that use the same LoRA config into the same batch (see the sketch after this list)
  2. Call common_lora_adapters_apply before processing the batch (remember to clear the KV cache if needed)
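
A minimal, standalone sketch of the grouping step, to make the idea concrete. The `Request` struct, the `lora_config` key, and `process_batch_with_lora` are hypothetical placeholders; the real server would use its own slot/task structures and `common_lora_adapters_apply` instead.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical, simplified stand-in for a queued server request.
struct Request {
    int         id;
    std::string lora_config; // serialized adapter ids + scales, used as a grouping key
};

// Placeholder for "apply this group's adapters, then decode the batch".
// In the real server this is where common_lora_adapters_apply would be called
// (clearing the KV cache if the adapter set actually changed).
static void process_batch_with_lora(const std::string & lora_config,
                                    const std::vector<Request> & batch) {
    std::cout << "applying lora config [" << lora_config << "] and decoding "
              << batch.size() << " request(s)\n";
}

int main() {
    std::vector<Request> pending = {
        {1, "adapter_a:1.0"},
        {2, "adapter_b:0.5"},
        {3, "adapter_a:1.0"},
    };

    // Step 1: group requests that share the same LoRA config.
    std::map<std::string, std::vector<Request>> groups;
    for (const auto & req : pending) {
        groups[req.lora_config].push_back(req);
    }

    // Step 2: apply each group's adapters once, then process its batch.
    for (const auto & [config, batch] : groups) {
        process_batch_with_lora(config, batch);
    }
    return 0;
}
```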
michaellin99999 commented 6 days ago

I think there needs to be another way. It is weird to apply a LoRA swap only when the server is idle; the swap is only meaningful when actual users request it, e.g. "summarize this for me", "calculate this for me", etc. The need to swap adapters is an instantaneous thing. If you think about it, it's not possible to predict when users will need the swap, so the better approach is to have the swap happen WHEN they need it. This functionality is critical, especially for small models that have to fit multiple use cases.