uiuc-focal-lab / syncode

Efficient and general syntactical decoding for Large Language Models

Skip or speed up lexer preprocessing #115

Open teoremma opened 4 days ago

teoremma commented 4 days ago

Currently, when trying to run constrained decoding with a new grammar, we are shown the message:

Creating DFA mask store for LlamaTokenizerFast and custom, may take more than 10 minutes.

This holds true even for the smallest grammar examples.

Sadly, this makes experimenting with and debugging grammars quite cumbersome, because any modification results in a cache miss and triggers this expensive preprocessing. Additionally, we are working in a setup where the grammar is modified on the fly between interactions with the LLM, so 10 minutes per change is prohibitively expensive.
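For reference, here is a rough sketch of the kind of setup we are running. It assumes the `Syncode` wrapper API shown in the README; the model name and grammar here are just placeholders, and the exact parameters may differ:

```python
from syncode import Syncode

# Tiny custom grammar (Lark syntax). Even a grammar this small triggers
# the full DFA mask store construction on the first run.
custom_grammar = """
start: "yes" | "no"
"""

# Placeholder model; in our case the tokenizer is LlamaTokenizerFast.
syn_llm = Syncode(
    model="meta-llama/Llama-2-7b-hf",
    grammar=custom_grammar,
    parse_output_only=True,
)

# Any edit to `custom_grammar` between calls means the cached mask store
# no longer applies, so the ~10 minute preprocessing step runs again.
print(syn_llm.infer("Answer yes or no: is water wet?"))
```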

Even if decoding performance would be affected, is there a way to skip this preprocessing?

shubhamugare commented 4 days ago

At this point, there is no way to skip it, but I can implement it soon (probably by the end of this week).