ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Server: Add prompt processing progress endpoint? #6586

Open stduhpf opened 6 months ago

stduhpf commented 6 months ago

Feature Description

It would be nice to have an endpoint in the server example to fetch information about the progress of an ongoing prompt processing. It could return something like this:

{
    "processing": [true|false],
    "prompt_length": [number of uncached tokens of the last prompt],
    "remaining": [number of tokens yet to be processed]
}
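
For illustration, a minimal client-side sketch of how such an endpoint could be polled to drive a progress indicator. The /prompt-progress path is hypothetical (no path is proposed above); the field names follow the schema sketch, and the default host/port are assumed:

import json
import time
import urllib.request

SERVER = "http://localhost:8080"  # assumed default host/port

def poll_prompt_progress():
    # Poll the (hypothetical) progress endpoint until processing finishes.
    while True:
        with urllib.request.urlopen(f"{SERVER}/prompt-progress") as resp:
            status = json.load(resp)
        if not status["processing"]:
            break
        done = status["prompt_length"] - status["remaining"]
        print(f"prompt processing: {done}/{status['prompt_length']} tokens")
        time.sleep(0.5)

poll_prompt_progress()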

Motivation

For longer prompts, or when processing speed is very slow, it would be nice to get a sense of how far along the prompt processing is. This could also be useful for other projects, not just the server.

Possible Implementation

I haven't looked too deeply into the current server implementation yet, so I can't really tell how this would work, but I imagine it would require some deeper changes in the backend too. About a year ago, I added a similar feature to a very old project based on an ancient version of llama.cpp: https://github.com/stduhpf/fastLLaMa/commit/1ebd5ba79b3a7e4461166fe8683b366ce77a8933. It is now very much outdated, but the feature was nice to have.

phymbert commented 6 months ago

Have you looked at the /slots endpoint? I think it's all you need
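
(For reference, a minimal sketch of querying /slots with the Python standard library; the default host and port are assumed, and the exact response fields depend on the server version:)

import json
import urllib.request

# Fetch the current state of all server slots (assumes default host/port
# and that the /slots endpoint is enabled in this server build).
with urllib.request.urlopen("http://localhost:8080/slots") as resp:
    slots = json.load(resp)

for slot in slots:
    print(json.dumps(slot, indent=2))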

stduhpf commented 6 months ago

Have you looked at the /slots endpoint? I think it's all you need

I can't get a response from the server on the /slots endpoint during prompt processing. It works during text generation and reports how many tokens are left to generate, but what I would like is that kind of response during prompt processing.

Maybe it's already supposed to be working during prompt processing, in which case there's probably a bug.

phymbert commented 6 months ago

Maybe it's already supposed to be working during prompt processing, in which case there's probably a bug.

It's not a bug. Prompt processing blocks the main loop during a batch iteration. You can reduce the batch size. We also have in mind to split concurrent prompt processing more fairly.
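
(For example, something along these lines; this is only a sketch, and the binary name, model path, and batch size are placeholders to adapt to your setup:)

import subprocess

# Start the server with a smaller logical batch size (-b), so the main loop
# yields between batches more often during prompt processing.
# Binary and model paths are placeholders.
subprocess.run(["./server", "-m", "models/model.gguf", "-b", "256"])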

More info in:

stduhpf commented 6 months ago

OK, so decreasing the batch size allows the server to respond on that endpoint between batches during prompt processing, but /slots doesn't report progress during prompt processing.

phymbert commented 6 months ago

/slots doesn't report progress during prompt processing.

Which metrics do you want to see?

stduhpf commented 6 months ago

The current response JSON contains these metrics:

[
    {
        "next_token": {
            "n_remain": -1,
            "n_decoded": 0,
            ...
        },
        ...
    }
]

During prompt processing, these stay at their default values of -1 and 0. During token generation, they both get updated as tokens are generated, so together they add up to the value of n_predict. It would be cool to have something similar, or to reuse these fields during prompt processing, so that they add up to the number of tokens in the prompt being processed.
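
As an illustration, a sketch of how a client could turn those fields into a progress figure during generation today; if the same fields were reused during prompt processing as suggested, the same arithmetic would apply to the prompt tokens (default host/port assumed):

import json
import urllib.request

# Poll /slots and derive a progress estimate from the next_token fields.
with urllib.request.urlopen("http://localhost:8080/slots") as resp:
    slots = json.load(resp)

for slot in slots:
    nt = slot.get("next_token", {})
    n_decoded = nt.get("n_decoded", 0)
    n_remain = nt.get("n_remain", -1)
    if n_remain >= 0:
        total = n_decoded + n_remain
        print(f"slot progress: {n_decoded}/{total} tokens")
    else:
        # Currently the case during prompt processing: fields still at defaults.
        print("no progress information available")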

compilade commented 6 months ago

From my understanding of batch processing, this information is not knowable (though it's possible I'm misunderstanding something). During prompt processing, the prompt is split into batches of n_batch tokens (2048 by default), and batches are further split into ubatches of n_ubatch tokens (512 by default). Each layer is then computed (sequentially) over all the tokens (in parallel) in a ubatch, so the tokens of a ubatch all "finish" processing at the same time, in a single forward pass of the compute graph.

But it might still be possible to get an estimate of the progress within a ubatch with some heuristic based on how many nodes in the compute graph have been computed compared to the total node count of the graph, though I don't know whether that information can be extracted at all, or whether it can be done reliably for all backends. Maybe there's a way.

But if what you're asking for is progress at batch granularity, that should be easier.
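
To make the possible granularity concrete, a small sketch using the default sizes mentioned above (the helper is purely illustrative):

import math

def progress_checkpoints(n_prompt_tokens, n_batch=2048, n_ubatch=512):
    # How many times progress could be reported at batch vs. ubatch granularity.
    n_batches = math.ceil(n_prompt_tokens / n_batch)
    n_ubatches = math.ceil(n_prompt_tokens / n_ubatch)
    return n_batches, n_ubatches

# Example: a 16384-token prompt with the default sizes.
print(progress_checkpoints(16384))  # (8, 32): 8 batch-level updates, 32 ubatch-level
# Anything finer-grained would need a heuristic, e.g. counting computed graph nodes.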

phymbert commented 6 months ago

Maybe the CB eval approach on the server can also help: