Open · teoremma opened this issue 1 month ago
At this point, there is no way to skip it, but I can implement it soon (probably by the end of this week).
Hi @teoremma,
Removing the DFA mask store dependency is causing inference to be a bit slower than I expected, and might not work out as easily. I'll spend a little more time on this to see if I can make it work this week.
Slightly longer term, I think that if I could build the DFA mask store incrementally, reusing the cache for previous grammar rules, it could be a much better solution. I'll spend some time exploring that as well.
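Roughly, the idea would be something like the minimal sketch below: key the cache per grammar rule rather than per whole grammar, so only edited or new rules pay the DFA construction cost. The `build_dfa_for_rule` helper and the on-disk layout are hypothetical, not the current code.

```python
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path(".dfa_mask_cache")  # hypothetical on-disk cache location


def _rule_key(rule_text: str) -> str:
    """Key each grammar rule by a hash of its own text, not of the whole grammar."""
    return hashlib.sha256(rule_text.encode("utf-8")).hexdigest()


def build_mask_store(grammar_rules: dict, build_dfa_for_rule):
    """Build the mask store rule by rule, reusing cached DFAs for unchanged rules.

    grammar_rules: mapping {rule_name: rule_text}.
    build_dfa_for_rule: the (hypothetical) expensive step that compiles one
    rule into its DFA / token-mask tables.
    """
    CACHE_DIR.mkdir(exist_ok=True)
    mask_store = {}
    for name, text in grammar_rules.items():
        cache_file = CACHE_DIR / f"{_rule_key(text)}.pkl"
        if cache_file.exists():
            # Cache hit: this rule's text did not change, so skip recompilation.
            mask_store[name] = pickle.loads(cache_file.read_bytes())
        else:
            # Cache miss: only edited or new rules pay the expensive build.
            dfa = build_dfa_for_rule(text)
            cache_file.write_bytes(pickle.dumps(dfa))
            mask_store[name] = dfa
    return mask_store
```

In reality the rules interact through shared terminals and the tokenizer, so the cache key would likely need to include those as well, but even this coarse version would avoid rebuilding rules that have not changed.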
Either way, I'll update you on this soon.
I see, that's understandable. Building the DFA incrementally would also make experimentation faster.
Thanks a lot, I'll keep an eye on it.
Currently, when trying to run constrained decoding with a new grammar, we are prompted with
This holds true even for the smallest grammar examples.
Sadly, this makes experimenting with and debugging grammars quite cumbersome, because any modification results in a cache miss and triggers this expensive preprocessing. Additionally, we are currently working in a setup where the grammar is modified on the fly between interactions with the LLM, so a 10-minute rebuild is prohibitively expensive.
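For context, this is roughly how we understand the current caching to behave (a minimal, self-contained sketch with hypothetical names; we are assuming the cache is keyed on a hash of the entire grammar, which is why any edit misses):

```python
import hashlib

# Assumption on our side: the preprocessing cache appears to be keyed on the
# whole grammar, so even a one-character edit is a miss and pays the full
# (~10 minute) DFA mask store build again.

_cache: dict = {}

def expensive_preprocess(grammar_text: str):
    # Stand-in for the real, very slow mask store construction.
    return f"mask store for {len(grammar_text)} chars of grammar"

def get_mask_store(grammar_text: str):
    key = hashlib.sha256(grammar_text.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = expensive_preprocess(grammar_text)
    return _cache[key]

grammar_v1 = 'start: "yes" | "no"'
grammar_v2 = 'start: "yes" | "no" | "maybe"'  # tiny edit made on the fly between LLM calls

get_mask_store(grammar_v1)
get_mask_store(grammar_v2)
print(len(_cache))  # 2: both grammar versions were preprocessed from scratch
```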
Is there a way to skip this preprocessing, even if decoding performance is affected?