Open ggerganov opened 10 months ago
I would have expected the compiler to optimize it straight away 🤷🏻
Would an integration of Outlines help? Like they are doing with vLLM: https://github.com/outlines-dev/outlines/issues/163
@ExtReMLapin This copy is used only in the speculative example. Even if it helps there, it won't have any effect on the general use case. Still, a PR is welcome.
@gottlike An efficient low-level solution such as the one we currently have seems like a better approach to me.
I noticed that inference at some point gets exponentially slower when there are a lot of deeply nested but open grammars. By "open" I mean grammars with many different possibilities. As an example, I am working on a PydanticModel -> JsonSchema -> Grammar pipeline, and when the model outputs a long list of nested subobjects, this effect kicks in and generation eventually gets stuck.
@shroominic On my end it just gets slower the longer it spends printing the JSON array, with no nested objects.
I found a similar exponential slowdown to the one mentioned by @shroominic for my use case, which is generating code in a language similar to OCaml. Generation was very fast for the first 200 tokens but slowed to more than 400 seconds per token as I approached 300 tokens.
I plotted the grammar stack size and duration per token over time and found stack size to be the main factor in the slowdown. The number of grammar stacks can go up to 800K for my grammar. I'm not very familiar with the grammar sampling algorithm used in llama.cpp, but I suspect it's exponential in the length of the parsed string. Polynomially bounded parsing algorithms like the Earley parser might help avoid the exponential blowup.
After looking into the code, I think there's a seemingly obvious and much simpler way to optimize grammar sampling even without threading.
Right now, it manually checks all token candidates and removes any candidates that would violate the grammar.
It would be simpler and more effective to sample the normal way, check whether the chosen token violates the grammar before proceeding with it, and only if it does, fall back to the current behavior that 'forces' the grammar. Right now, it 'forces' the entire sample set to match before picking. This is resource intensive for large-vocabulary models and largely unnecessary, since the model will naturally follow the grammar most of the time with typical sampler settings (especially with Min P / low temperature), so the new behavior would only need to run the full grammar calculations some of the time.
@ejones Any suggestions for how I would go about implementing a solution?
I'm doing some investigation. I think the easiest way to do this without refactoring the grammar itself is by running a check against the existing grammar function with only the single candidate in sampling.cpp; if it's correct, we proceed. If it's wrong, we restart sampling, this time running:
if (ctx_sampling->grammar != NULL) {
    llama_sample_grammar(ctx_main, &cur_p, ctx_sampling->grammar);
}
This would run before the rep pen or any other modifications are made to the logits. I plan to achieve this by making a copy of the initial logits for the "2nd pass".
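Roughly, the sketch I have in mind looks like this (names like sample_with_lazy_grammar and saved_logits are just illustrative, not existing code, and it assumes the legacy llama_sample_* API):

#include <cmath>      // INFINITY
#include <vector>
#include "llama.h"    // llama_sample_token, llama_sample_grammar, llama_token_data_array
#include "sampling.h" // common/sampling.h: llama_sampling_context

// Sketch only: sample unconstrained first, check that single token against the
// grammar, and fall back to full grammar filtering only if it was rejected.
static llama_token sample_with_lazy_grammar(
        struct llama_context          * ctx_main,
        struct llama_sampling_context * ctx_sampling,
        llama_token_data_array        & cur_p,
        const std::vector<float>      & saved_logits) { // full-vocab logits copied before rep pen etc.
    const llama_token tok = llama_sample_token(ctx_main, &cur_p);
    if (ctx_sampling->grammar == NULL) {
        return tok;
    }

    // First pass: check only the chosen token. llama_sample_grammar sets the
    // logit of any grammar-violating candidate to -INFINITY.
    llama_token_data       single     = { tok, 0.0f, 0.0f };
    llama_token_data_array single_arr = { &single, 1, false };
    llama_sample_grammar(ctx_main, &single_arr, ctx_sampling->grammar);
    if (single.logit > -INFINITY) {
        return tok; // grammar-legal: the full vocabulary scan was skipped
    }

    // Second pass: restore the saved logits and apply the grammar to all candidates.
    for (size_t i = 0; i < cur_p.size; ++i) {
        cur_p.data[i].logit = saved_logits[cur_p.data[i].id];
    }
    cur_p.sorted = false;
    llama_sample_grammar(ctx_main, &cur_p, ctx_sampling->grammar);
    return llama_sample_token(ctx_main, &cur_p);
}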
There have been a few reports where the grammar sampling can significantly degrade the performance. It would be nice to profile and optimize the implementation - there should be room for improvements.
Already on-going efforts:
- Reserve space in decode_utf8 #4210
- Allow reusing results from llama_token_to_piece when sampling grammars #4213
Probably worth looking into multi-threading the implementation as well.
https://github.com/ggerganov/llama.cpp/pull/4306
I have made a pull request which should reduce the number of checks necessary to 1 for most tokens instead of all 32,000 tokens in the vocabulary. I have not evaluated whether or not it is actually faster yet, but I'm guessing that avoiding thousands of UTF8 decoding steps for most tokens would improve performance.
@AlienKevin thanks for investigating! Yeah it's a simple top-down backtracking parser so it can be exponential in the worst case. It works best for grammars with little or no ambiguity or nondeterminism. A deterministic grammar should maintain a constant stack count. This isn't obvious though and we could probably do a better job signaling this to the user.
@kalomaze looks great, commented on your PR as well.
Grammar processing appears to be quite slow (again?): https://github.com/ggerganov/llama.cpp/pull/4306#issuecomment-1947021051
No issue on my end
I've noticed it varies widely with respect to prompt complexity. My JSON schema -> grammar contains three levels of object-arrays and if I ask for a shorter output it completes reasonably quickly with the conforming schema and runs at a consistently high level of CPU utilization.
But if I ask for an output that is about ten times longer, for the exact same schema, I notice the resource utilization (CPU mainly) becomes highly variable and rarely sustains max utilization. The overall inference time gets long enough that it's not worth waiting for the task to complete (30+ minutes) whereas in contrast the exact same prompt will run for about 7 minutes consistently with the grammar/schema removed. If I need to just post a full repro I'm happy to link a Gist.
This issue was closed because it has been inactive for 14 days since being marked as stale.
Just to close the loop on my previous comment-- I continued experimenting with this feature on a wide variety of cases and ultimately concluded that the performance variance is too large for production use, even on fast GPUs such as the A100s.
I would, however, very much like to see this feature perform consistently enough for production as it is otherwise very useful. I am happy to help with testing or reproduction of test cases if anyone decides to work on this.
I've been digging into this lately, and I've been using the integration tests in #6472 to do some crude performance profiling.
I've definitely seen the sort of dramatic stack expansion that @AlienKevin is talking about. I think there are many causes, but one that I've been digging into is how easy it is for alternate grammars to dramatically inflate the stack. For instance, imagine a grammar that says:
root ::= [0-9] ("a"? "a"? "a"? "a"? "a"? [0-9])*
This is a rather extreme example, but hopefully it illustrates the point.
If you have an input string that looks like "1a2a3a4a5", then at the first character, there is only 1 stack.
But it doesn't know if the first 'a' matches the first a? in the grammar, or one of the later ones -- so it needs to track all 5 possibilities. We now have 7 stacks, which then grow to 15, 35, 75, etc.
Parsing character 0 ('1'), stack size 1
Parsing character 1 ('a'), stack size 7
Parsing character 2 ('2'), stack size 15
Parsing character 3 ('a'), stack size 35
Parsing character 4 ('3'), stack size 75
Parsing character 5 ('a'), stack size 175
Parsing character 6 ('4'), stack size 375
Parsing character 7 ('a'), stack size 875
Parsing character 8 ('5'), stack size 1875
If we change our input string to "1aa2aa3aa4aa5", then it gets even worse, because it permutates a bit:
Parsing character 0 ('1'), stack size 1
Parsing character 1 ('a'), stack size 7
Parsing character 2 ('a'), stack size 15
Parsing character 3 ('2'), stack size 20
Parsing character 4 ('a'), stack size 70
Parsing character 5 ('a'), stack size 150
Parsing character 6 ('3'), stack size 200
Parsing character 7 ('a'), stack size 700
Parsing character 8 ('a'), stack size 1500
Parsing character 9 ('4'), stack size 2000
Parsing character 10 ('a'), stack size 7000
Parsing character 11 ('a'), stack size 15000
Parsing character 12 ('5'), stack size 20000
On my laptop (Macbook M1, 32gb RAM), this noticeably lags the machine, and it hitches to execute even this (relatively) short grammar and validation string. I've done tests with much larger grammars that push MUCH larger stack sizes, and the ambiguity can really drag things down -- even to the point of memory exhaustion.
Currently I'm tackling some left-recursion issues that can cause segfaults ( #6492 ), but after I get done with that (or give up), then I'm going to tackle these performance issues related to ambiguity. I'm not entirely sure, but I think that there should be a viable algorithm to prune near-redundant stacks if their outcomes would be equivalent. I.E., if we've got four potential "a"'s to match, and only one "a", then it doesn't matter which one we choose -- the others can be discarded once we get to the next token, so long as they all converge onto the same spot in the same rule.
This isn't the only performance-related issue that grammars are seeing, but I believe that these massive exponential growths in the stack size are one of the biggest opportunities for optimization.
Anyways, that's where I'm at on things -- will keep y'all posted.
I'm not very familiar with the current setup of our CI performance profilers -- if I were to make improvements to the grammar engine, would those speed improvements show up in our current bank of benchmarks?
if I were to make improvements to the grammar engine, would those speed improvements show up in our current bank of benchmarks?
We don't have benchmarks for this yet. You will have to do some custom profiling to determine how the performance changes. With time, we can attempt to add some sort of speed information about the grammar to the CI or at least some local tools.
how easy it is for alternate grammars to dramatically inflate the stack. For instance, imagine a grammar that says:
root ::= [0-9] ("a"? "a"? "a"? "a"? "a"? [0-9])*
@HanClinto One possibly big source of such explosive repetitions is JSON grammars w/ minItems/maxItems (or w/ JSON string regexp patterns such as {"type": "string", "pattern": "a{3,10}"}). The easy workaround right now is to rewrite the rule as:
root ::= [0-9] (("a" ("a" ("a" ("a" ("a")?)?)?)?)? [0-9])*
I'm working on fixing the JSON grammar conversion to do this (e.g. https://github.com/ggerganov/llama.cpp/commit/375f85dd573915eb758e4607b156a59e8cd0dbf6), hope to send a PR soon. I'll probably update the GBNF doc w/ performance caveats in the same PR.
@ochafik That indeed is a massive improvement! Testing your grammar against 1a2a3a4a5 gives:
Parsing character 0 ('1'), stack size 1
Parsing character 1 ('a'), stack size 3
Parsing character 2 ('2'), stack size 2
Parsing character 3 ('a'), stack size 3
Parsing character 4 ('3'), stack size 2
Parsing character 5 ('a'), stack size 3
Parsing character 6 ('4'), stack size 2
Parsing character 7 ('a'), stack size 3
Parsing character 8 ('5'), stack size 2
And testing against the worst case of 1aa2aa3aa4aa5 gives:
Parsing character 0 ('1'), stack size 1
Parsing character 1 ('a'), stack size 3
Parsing character 2 ('a'), stack size 2
Parsing character 3 ('2'), stack size 2
Parsing character 4 ('a'), stack size 3
Parsing character 5 ('a'), stack size 2
Parsing character 6 ('3'), stack size 2
Parsing character 7 ('a'), stack size 3
Parsing character 8 ('a'), stack size 2
Parsing character 9 ('4'), stack size 2
Parsing character 10 ('a'), stack size 3
Parsing character 11 ('a'), stack size 2
Parsing character 12 ('5'), stack size 2
Indeed, that's a massive savings. Nice speedup!
It would still be nice to find ways to speed up the grammar tree navigation even with more poorly written grammars, but improving the quality of the grammars in this way is a huge help.
ways to speed up the grammar tree navigation even with more poorly written grammars
@HanClinto I'd be inclined to detect some easily rewritable grammar cases on the fly and explode when the grammar becomes too combinatorial (w/ a link to a "performance tips" section of the GBNF wiki page), either with a cap on stack size or some builtin timeout maybe?
Maybe also some new features like numbered repetition operators "a"{,5} (desugared as the grammar above) and maybe other regexp-derived syntax features (reluctant/eager modifiers?) could make it easier to write efficient grammars.
Fwiw, I've wanted bounded repetitions a few times with this grammar; recursive white space sometimes lets the model spin forever.
It would also be great to see some stats on the grammar as well, either after running or after desugaring so that we can optimize grammars.
@HanClinto I'd be inclined to detect some easily rewritable grammar cases on the fly and explode when the grammar becomes too combinatorial (w/ a link to a "performance tips" section of the GBNF wiki page), either with a cap on stack size or some builtin timeout maybe?
This sounds really good. Detecting and gracefully exiting is the first step. Worst case is that things explode in an infinite loop (as is the case with left-recursion, as in #6492), and implementing a max stack size of 1024 or something should at least give a reasonable approach.
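A cap like that could be as simple as the following (the helper name and the 1024 limit are placeholders, not an existing code path):

#include <cstddef>
#include <stdexcept>

// Illustrative guard only: refuse to keep advancing the grammar once the number
// of live parse stacks exceeds a cap, instead of exhausting memory.
static void check_grammar_stack_cap(size_t n_stacks) {
    constexpr size_t MAX_GRAMMAR_STACKS = 1024; // hypothetical limit discussed above
    if (n_stacks > MAX_GRAMMAR_STACKS) {
        throw std::runtime_error(
            "grammar is too ambiguous (more than 1024 parse stacks); "
            "see the GBNF performance tips");
    }
}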
Maybe also some new features like numbered repetition operators "a"{,5} (desugared as the grammar above) and maybe other regexp derived syntax features (reluctant/eager modifiers?) could make it easier to write efficient grammars.
Yeah, I think that could be pretty reasonable. I'm also wondering about parse_sequence() in grammar-parser.cpp, and this section of rewriting +*? operators:
// apply transformation to previous symbol (last_sym_start to end) according to
// rewrite rules:
// S* --> S' ::= S S' |
// S+ --> S' ::= S S' | S
// S? --> S' ::= S |
Makes me wonder if there is a better way to do this (more akin to what you wrote above) but I'm still relatively new to grammar engines.
I'm learning a lot about grammar parsing as part of this exercise -- I never read the dragon book, but I'm wondering if I should order myself a copy as part of this work. :)
Fwiw, I've wanted bounded repetitions a few times with this grammar; recursive white space sometimes lets the model spin forever.
@o1lo01ol1o Yeah -- poor handling of whitespace seems to be an issue.
Similar to what @ochafik is saying, I wonder if a good first step would be an optimization pass where -- after expanding grammars -- adjacent optional tokens are always collapsed into something more efficient, i.e. ws? ws? being condensed into something like ws{,2}?
I'm also wondering about parse_sequence() in grammar-parser.cpp, and this section of rewriting +*? operators: `// S* --> S' ::= S S' |` ... Makes me wonder if there is a better way to do this (more akin to what you wrote above) but I'm still relatively new to grammar engines.
@HanClinto I don't think these rewrites are problematic, as they don't change the number of ways things can be parsed. I'm not sure I understand the grammar decoding logic, but it seems it may be keeping the stacks for all the possible ways to decode the current sequence (I'd speculate it's done to avoid backtracking, as we don't want to ungenerate tokens. Edit: that's so we get the union of all next tokens that may match any of the possible ways of parsing; prime target for acceleration by a quantum computer? 🤪). The a? a? a? a? a? rule has many possible ways to parse / generate the same thing (e.g. 'aaa' might require 10 stacks, cf. combinations).
That parse_sequence seems like a good place to implement the bounded repetition operator tho, you could desugar the x{min,max} syntax w/ code similar to this build_repetitions maybe.
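For what it's worth, a rough sketch of that desugaring (the helper name build_repetition is illustrative and just emits the nested-optional GBNF text shown earlier, not the actual grammar-parser.cpp change):

#include <string>

// Sketch: expand item{min_items,max_items} into min_items mandatory copies
// followed by (max_items - min_items) nested optional groups.
static std::string build_repetition(const std::string & item, int min_items, int max_items) {
    // optional tail, built inside-out so the innermost group is "(item)?"
    std::string tail;
    for (int i = 0; i < max_items - min_items; ++i) {
        tail = "(" + item + (tail.empty() ? "" : " " + tail) + ")?";
    }
    // mandatory prefix of min_items copies
    std::string out;
    for (int i = 0; i < min_items; ++i) {
        out += (out.empty() ? "" : " ") + item;
    }
    if (!tail.empty()) {
        out += (out.empty() ? "" : " ") + tail;
    }
    return out;
}

// build_repetition("\"a\"", 0, 5) yields:
//   ("a" ("a" ("a" ("a" ("a")?)?)?)?)?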
I'm learning a lot about grammar parsing as part of this exercise -- I never read the dragon book, but I'm wondering if I should order myself a copy as part of this work. :)
Haven't read this one but +1000 to learning about parsers and compilers before LLMs completely take that away from us 😅
I've taken a deeper look at how grammar sampling works and besides some possible tactical improvements (e.g. extra reserve calls & reusing stacks vectors combined give over 20% speedup on my test grammars), I'm trying to implement some classical parser optimizations (e.g. precomputing the « head set » of character ranges allowed recursively at the start of each rule alternative, so as to avoid instantiating most of the alternatives; currently we "preheat" stacks for all possible upcoming alternatives before consuming chars, instead we could add only the stacks for alternatives that would accept each char in their head sets). Hope to send PRs in the next couple of weeks if the results are good enough 🤞.
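Very roughly, the head-set precomputation I have in mind looks something like this (only plain LLAMA_GRETYPE_CHAR and LLAMA_GRETYPE_RULE_REF head elements are handled; ranges, negations and char alternates are left out, and the function name is just illustrative):

#include <cstdint>
#include <set>
#include <vector>
#include "llama.h" // llama_grammar_element, llama_gretype

// Sketch: collect the union of first characters that any alternative of a rule
// can start with, descending into referenced rules.
static void collect_head_chars(
        const std::vector<std::vector<llama_grammar_element>> & rules,
        size_t rule_id, std::set<uint32_t> & head, std::set<size_t> & visited) {
    if (!visited.insert(rule_id).second) {
        return; // already expanded; guards against recursive rules
    }
    bool at_alt_start = true;
    for (const auto & el : rules[rule_id]) {
        if (at_alt_start) {
            if (el.type == LLAMA_GRETYPE_CHAR) {
                head.insert(el.value);                               // literal first character
            } else if (el.type == LLAMA_GRETYPE_RULE_REF) {
                collect_head_chars(rules, el.value, head, visited);  // descend into referenced rule
            }
            at_alt_start = false;
        }
        if (el.type == LLAMA_GRETYPE_ALT) {
            at_alt_start = true; // next element begins a new alternative
        }
    }
}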
I was able to make a pretty big speed improvement last night in the case of ambiguous alternate grammars. In the case of ambiguous grammars, the stacks are duplicated, and we can prune duplicates in order to trim the stack sizes.
The grammar that I'm using for all of these tests is root ::= [0-9] ("a"? "a"? "a"? "a"? "a"? "a"? "a"? "a"? [0-9])*
Even in a pretty simple case the stacks balloon quickly. But by adding a culling step to remove duplicates after each time we advance the stacks, we get a much more reasonable stack progression, and it runs MUCH faster.
Testing against a much larger example of 1aa2aa3aa4aa5aa6, the stack size grew to 51631104 by the end, and I was afraid that it wasn't going to finish.
After the culling optimization it runs near-instantly, as opposed to over a minute for the previous run.
This won't help in other cases, but I'm pretty happy with this culling step, and I hope to submit a PR for this improvement soon.
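For reference, the culling step itself is tiny; a minimal sketch, assuming the stacks are stored the way llama.cpp keeps them (vectors of pointers into the rules) and with an illustrative helper name:

#include <algorithm>
#include <vector>
#include "llama.h" // llama_grammar_element

// Sketch: after the stacks are advanced for each accepted character, drop exact
// duplicates so ambiguous alternatives that converged onto the same position
// are only counted once.
static void cull_duplicate_stacks(
        std::vector<std::vector<const llama_grammar_element *>> & stacks) {
    std::sort(stacks.begin(), stacks.end());
    stacks.erase(std::unique(stacks.begin(), stacks.end()), stacks.end());
}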
BTW, I spent this evening doing two different experiments for optimizations -- both turned up zero improvement.
The first was attempting to modify reject_candidates to modify the candidates array in-place to reduce allocation of vectors, but that still has a bug in it and I haven't fully tracked it down yet. I'm optimistic this might still bear some fruit at some point.
My second attempt was to memoize llama_grammar_advance_stack, but I didn't see a speed improvement for it. It's possible that it will help more on larger sampling lengths, but I'm not hopeful.
Overall not much fruit for the evening -- we'll see what the rest of the weekend brings.
The first was attempting to modify reject_candidates to modify the candidates array in-place to reduce allocation of vectors, but that still has a bug in it and I haven't fully tracked it down yet. I'm optimistic this might still bear some fruit at some point. My second attempt was to memoize llama_grammar_advance_stack, but I didn't see a speed improvement for it. It's possible that it will help more on larger sampling lengths, but I'm not hopeful.
@HanClinto I've tried an embarrassingly large number of variants of these ideas too... In the end, good old profiling helped me identify what seem to be two big easy wins → https://github.com/ggerganov/llama.cpp/pull/6811 (makes grammar sampling itself up to 10x faster)
Why not use ANTLR? They have more grammars, and llama.cpp's C grammar is missing a lot.
ANTLR 4: https://github.com/antlr/grammars-v4/blob/master/c/C.g4 llama: https://github.com/ggerganov/llama.cpp/blob/master/grammars/c.gbnf
@liljohnak ANTLR generates parsers that operate on a given string and backtrack when they pick the wrong branch if/when the grammar is ambiguous. In contrast, llama.cpp's grammar-constrained sampling (https://github.com/ggerganov/llama.cpp/pull/1773) lets the model pick whichever next token would result in a string partially parsable from the start. To that effect, it keeps all the possible parsing stacks around (they're continuation stacks; their back points to the next char element that can be parsed).
As for their grammars, they can easily be converted to GBNF; for instance, I've let Opus have a go at the C one: c_antlr.gbnf... but it makes llama.cpp segfault for now (too many stacks? Edit: probably just left recursion (https://github.com/ggerganov/llama.cpp/issues/6492); new frontier to conquer!).
As a user of the json_schema feature in server, I would very much love faster grammar support :D
I'm building out an agent flow, where I call the LLM an extra time before each user message to extract any function/tool call, detect the intent, etc., which is done using a JSON schema like { "function_name": "...", "function_invocation": "...", "user_intent": "..."} etc.
And this extra inference step takes up to 5sec on my 3090, even though it's hardly generating any tokens (in fact, I limit it to 20), presumably because of the grammar. After the function calling stage, the actual agent response from the LLM is very speedy, usually sub-1-second depending on length. So it's taking 5x longer to generate only a few tokens for function calling, compared to actually writing out a long response message.
I previously used TabbyAPI for this, and it handled the grammar extremely fast - sub-200ms usually, compared to 5sec in llama.cpp server.
@andysalerno Would you be able to share a real-world grammar to test one of the fixes that are in flight? (e.g. https://github.com/ggerganov/llama.cpp/pull/7424 )
Sure, I am using the json_schema property on a request to server at /v1/chat/completions with this schema:
{
    "title": "AnswerFormat",
    "type": "object",
    "properties": {
        "last_user_message_intent": {
            "type": "string"
        },
        "function_name": {
            "type": "string"
        },
        "invocation": {
            "type": "string"
        }
    },
    "required": [
        "last_user_message_intent",
        "function_name",
        "invocation"
    ]
}
I can respond shortly with some measurements showing how long this takes compared to a non-grammar generation with similar length.
Here is the verbose timings output from the request using the grammar:
"timings": {
"prompt_n": 592,
"prompt_ms": 708.655,
"prompt_per_token_ms": 1.1970523648648648,
"prompt_per_second": 835.3853426561585,
"predicted_n": 37,
"predicted_ms": 4103.24,
"predicted_per_token_ms": 110.89837837837837,
"predicted_per_second": 9.017264405689163
}
Note how it took 4 sec to predict 37 tokens, with a predicted_per_token_ms of 110.89 and a predicted_per_second of 9.
And here's the timing for the very next request, which does not use a grammar:
"timings": {
"prompt_n": 140,
"prompt_ms": 280.92,
"prompt_per_token_ms": 2.0065714285714287,
"prompt_per_second": 498.3625231382599,
"predicted_n": 11,
"predicted_ms": 292.259,
"predicted_per_token_ms": 26.569000000000003,
"predicted_per_second": 37.637848620572846
}
Note it only took ~300ms for 11 tokens, with a predicted_per_token_ms of 26.6 and a predicted_per_second of 37.6.
@andysalerno Thanks! Added your grammar to these benchmarks; it's one that seems like it will benefit from both https://github.com/ggerganov/llama.cpp/pull/7424 (grammar-resampling) & https://github.com/ggerganov/llama.cpp/pull/6811 (grammar-fast).
Example of a slow grammar:
root ::= dateforced | string
dateforced ::= "\"" "Date lol" "\""
string ::= EntityTypeNonDate
EntityTypeNonDate ::= "\"" ( [^D\x00-\x40\U0000005B-\UFFFFFFFF] | "D" [^a\x00-\x60\U0000007B-\UFFFFFFFF] | "Da" [^t\x00-\x60\U0000007B-\UFFFFFFFF] | "Dat" [^e\x00-\x60\U0000007B-\UFFFFFFFF]) ASCIIEntityNameContinue{0,15} "\""
ASCIICharLower ::= [a-z]
ASCIICharUpper ::= [A-Z]
ASCIIEntityName ::= ASCIIWordFirst (ASCIIWordNext){0,3}
ASCIIEntityNameContinue ::= (ASCIIWordNext){0,3}
ASCIIWordFirst ::= ASCIICharUpper ASCIICharLower{2,20}
ASCIIWordNext ::= ("-"|" ")? ASCIICharUpper? ASCIICharLower{2,20}
./llama-cli -m mistral-7b-instruct-v0.2.Q5_K_M.gguf -ngl 999999 --seed 0 --temp 0 -p "[INST]Who are you ? answer with quotes[/INST]" -n 512 --grammar-file ./grammar_fallback.gbnf