ggerganov / llama.cpp

LLM inference in C/C++

llama : combined beam search + grammar sampling strategy #2923

Open ggerganov opened 1 year ago

ggerganov commented 1 year ago

This feature was proposed by @spion in https://github.com/ggerganov/llama.cpp/issues/2813#issuecomment-1694390583

In some cases, it's useful to do constrained evaluation of logits based on a union of possible text values, then pick the sequence whose sum of log-probabilities (i.e. product of probabilities) gives the most probable outcome overall.

E.g. template (using MS guidance)

{{#select 'armor'}}leather{{or}}chainmail{{or}}plate{{/select}}

To reliably make the best choice, we'd need to calculate the probability of all 3 token sequences. It's easy if all the choices map to a single token, but with multiple tokens we'd need not just parallel generation but parallel logit evaluation of multiple possible paths.

If we go greedy, we might get suboptimal results in cases where the choice with the most probable first token is not the most probable choice overall.

It should be possible to implement this by combining the existing beam search and grammar sampling features. See the discussion in the referenced comment for more info
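
For illustration, here is a minimal, framework-agnostic sketch of the scoring idea (plain C++, not the llama.cpp API; the per-token log-probabilities are assumed to come from constrained evaluation of each candidate continuation):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// A candidate continuation (e.g. "leather", "chainmail", "plate") together with
// the log-probability of each of its tokens, obtained by force-decoding it.
struct Candidate {
    std::string text;
    std::vector<float> token_logprobs; // log p(token_i | prompt, token_0 .. token_{i-1})
};

// Summing log-probabilities is the same as multiplying probabilities, so the
// candidate with the highest sum is the most probable sequence overall.
static const Candidate * pick_best(const std::vector<Candidate> & cands) {
    const Candidate * best = nullptr;
    float best_score = -1e30f;
    for (const auto & c : cands) {
        float score = 0.0f;
        for (float lp : c.token_logprobs) {
            score += lp;
        }
        if (score > best_score) {
            best_score = score;
            best = &c;
        }
    }
    return best;
}

int main() {
    // Hypothetical numbers for illustration only: greedy sampling would commit to
    // "chainmail" because its first token is the most probable single step (-0.5),
    // but "leather" is the most probable choice overall.
    const std::vector<Candidate> cands = {
        { "leather",   { -0.8f }        }, // total -0.8  <- best overall
        { "chainmail", { -0.5f, -2.0f } }, // total -2.5
        { "plate",     { -1.6f }        }, // total -1.6
    };
    printf("best choice: %s\n", pick_best(cands)->text.c_str());
    return 0;
}
```

Combined with grammar sampling, the same scoring would be applied to the beams that the grammar keeps alive, and the beam with the highest cumulative log-probability would be selected.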

nhhung1810 commented 1 year ago

Hi team, I have some experience with beam search and I think I can help. Can I work on this, @ggerganov?

ggerganov commented 1 year ago

Sure, let us know if you make any progress and make sure to check the comments in the referenced issue

nhhung1810 commented 1 year ago

Sure @ggerganov. Besides that, is there anything I should be aware of? I'm using Apple Silicon for development.

ggerganov commented 1 year ago

Nothing specific comes to mind. I'd recommend writing a separate C++ example, similar to #2926, with extensive logging so we can debug what is being generated. If you open a draft PR, we can give recommendations during development, but you don't have to if you prefer to work on your own. Also, get familiar with the existing beam-search example and the grammar-related sampling options.

nhhung1810 commented 1 year ago

Hi @ggerganov, to be honest, it's quite hard to get started configuring and debugging the project. Is there a channel where we can get in touch to discuss how to start? If one doesn't exist, I'd also like to write that documentation down so it benefits new contributors.

FYI, I know how to write C++, but I don't have much experience building and shipping C++ projects, so maybe that's also part of the issue.

kiratp commented 12 months ago

@ggerganov a bunch of these cool toys (speculative execution, beam search) seem to be landing either in main or in separate executables in examples.

Do you intend to push for consolidating all of this functionality into main at some point?

viantirreau commented 10 months ago

Hi! What do you think I should do (implementation-wise) to speed up grammar-constrained generation? I am currently generating a specific JSON object with fixed keys using Mistral, and was wondering if I could somehow use the model to predict only the unknown values, based on some prompt + input, exactly like MS guidance. In my specific use case, this would approximately halve the number of tokens that need to be generated.

I was thinking about looking ahead one character at a time and, as long as there is exactly one option, accepting that character and continuing forward. That is to say, giving control back to the model as soon as we need to branch (which tends to happen when filling in the values of this specific JSON).

I didn't feel like opening another issue, as this one seemed closely related. Also, this has already been tangentially discussed in e.g. https://github.com/ggerganov/llama.cpp/pull/1773#issuecomment-1585805739 (Tobias Lütke):

> I wonder if there is an optimization where, if the next token is already known from the grammar, we could skip the inference and just add it? In many types of grammars like JSON or HTML that could really speed up generation.

The thing with "the next token is already known" is that some tokens share prefixes, so many of them could be valid simultaneously under some grammar. Thus I think it would be better to iterate one character at a time until there's a branching, and only then tokenize and batch-decode.

Thanks in advance for any thoughts or suggestions!
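
For what it's worth, here is a minimal sketch of the character-level fast-forward idea described above. The grammar interface is hypothetical (not the llama.cpp grammar API); it only illustrates accepting characters for free while the grammar allows exactly one option, and handing control back to the model at the first branch point:

```cpp
#include <set>
#include <string>

// Hypothetical grammar state, for illustration only: it reports which characters
// are currently allowed and can be advanced by one accepted character.
struct GrammarState {
    virtual ~GrammarState() = default;
    virtual std::set<char> allowed_next_chars() const = 0; // characters valid at this point
    virtual void advance(char c) = 0;                      // consume one accepted character
};

// Append the characters that are forced by the grammar (exactly one option)
// without running the model. Stop as soon as the grammar branches (or ends);
// at that point the model has to be sampled again.
std::string fast_forward(GrammarState & grammar) {
    std::string forced;
    for (;;) {
        const std::set<char> allowed = grammar.allowed_next_chars();
        if (allowed.size() != 1) {
            break; // branching or end of grammar: give control back to the model
        }
        const char c = *allowed.begin();
        forced.push_back(c);
        grammar.advance(c);
    }
    // The caller would tokenize `forced`, feed it with a single batch decode,
    // and only then sample the next token from the grammar-constrained logits.
    return forced;
}
```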

ggerganov commented 10 months ago

One simple workaround is to use the speculative example with a very tiny draft model (< 1B) and constrain it with the grammar. This is not optimal, but if the cost of drafting with the tiny model is negligible, it will serve as a way to "queue" the deterministic tokens from the grammar.
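
Roughly, one speculative step with a grammar-constrained draft would look like the sketch below (the two model calls are abstracted behind hypothetical callbacks, not the llama.cpp API): the tiny model cheaply proposes the tokens that the grammar effectively forces, and the large model verifies the whole draft in a single evaluation instead of generating those tokens one by one.

```cpp
#include <functional>
#include <vector>

using Token = int;

// One speculative step (hypothetical interfaces, for illustration only):
//  - `draft` runs the tiny, grammar-constrained model and proposes up to
//    n_draft tokens; tokens that are forced by the grammar cost almost nothing.
//  - `verify` runs the large model once over the whole draft and returns the
//    prefix of tokens it agrees with.
std::vector<Token> speculative_step(
        int n_draft,
        const std::function<std::vector<Token>(int)> & draft,
        const std::function<std::vector<Token>(const std::vector<Token> &)> & verify) {
    const std::vector<Token> proposed = draft(n_draft);
    return verify(proposed); // with grammar-forced tokens this is usually the entire draft
}
```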

ExtReMLapin commented 1 month ago

Any news on this? Also, what @viantirreau suggested would be top notch.