vecorro closed this issue 2 months ago.
(I am not a developer.) It looks like a problem in a downstream project. I suggest opening an issue there: https://github.com/abetlen/llama-cpp-python
Look at the older versions of the json.gbnf file, find the one that ggerganov made, and use that.
Just ran into this same problem. The older file works; the compiler doesn't seem to like the {4} part of it. I also reverted to the latest version before this change.
Same problem here: *, ?, and + work for repetition, but curly-bracket forms like {0,5}, {4}, or {1,16} do not.
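To make the contrast concrete, here is a minimal sketch (rule names invented; behavior assumed for llama_cpp_python 0.2.78 as reported below):

from llama_cpp import LlamaGrammar

# Classic repetition operators parse fine on 0.2.78.
ok = LlamaGrammar.from_string('''
root ::= word+ "!"?
word ::= [a-z]* " "
''')

# The same parser is expected to reject curly-bracket repetition with
# "parsed_grammar.rules is empty".
bad = LlamaGrammar.from_string('''
root ::= [a-z]{1,16}
''')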
OK, after some quick debugging it seems to be a problem with the llama-cpp-python library. They translated the parsing logic into Python, and that code doesn't support repetition with curly brackets. Reference: https://github.com/abetlen/llama-cpp-python/blob/01bddd669ca1208f1844ce8d0ba9872532641c9d/llama_cpp/llama_grammar.py#L837
I also tested my grammar file with the llama.cpp CLI and it works as expected.
Library Issue Reference: https://github.com/abetlen/llama-cpp-python/issues/1547
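Until that Python port catches up, one workaround is to expand the bounded repetition by hand. A minimal sketch (the hex4 rule is invented, standing in for a [0-9a-fA-F]{4} pattern like the one in the newer json.gbnf):

from llama_cpp import LlamaGrammar

# [0-9a-fA-F]{4} rewritten as explicit concatenation: same language,
# but parseable by llama_cpp_python 0.2.78.
grammar = LlamaGrammar.from_string('''
root ::= hex4
hex4 ::= [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]
''')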
Support for discrete repetition operators was only added about three weeks ago in #6640, so I'm curious where exactly the mismatch is.
Has this issue been solved? I'd really like to pass my grammar file as an argument to the API request. Is there a specific way to format it?
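If it helps, here is a minimal sketch of passing a grammar through the Python bindings (model path, grammar file, and prompt are placeholders; this assumes a llama-cpp-python version whose grammar parser accepts your file):

from llama_cpp import Llama, LlamaGrammar

llm = Llama(model_path="models/your-model.gguf")   # placeholder path
grammar = LlamaGrammar.from_file("json.gbnf")      # or LlamaGrammar.from_string(...)

# The grammar object constrains sampling for this completion call.
out = llm("Describe a cat as a JSON object.", grammar=grammar, max_tokens=256)
print(out["choices"][0]["text"])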
This issue was closed because it has been inactive for 14 days since being marked as stale.
What happened?
I have tried to load the json.gbnf grammar example but haven't been able to do so. The following code is not working. It throws the following error:
ValueError: from_string: error parsing grammar file: parsed_grammar.rules is empty
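For context, a call of roughly this shape (hypothetical, not the original snippet) runs into that error on 0.2.78 whenever the grammar uses curly-bracket repetition:

from llama_cpp import LlamaGrammar

with open("json.gbnf") as f:   # hypothetical path to the example grammar
    grammar = LlamaGrammar.from_string(f.read())
# ValueError: from_string: error parsing grammar file: parsed_grammar.rules is empty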
I'm not sure if the problem resides in the grammar definition file or in the LlamaGrammar class. The problem shows up when I use the .from_file method as well.
Name and Version
Ubuntu 22.04, Python 3.11 (Anaconda), llama_cpp_python 0.2.78
What operating system are you seeing the problem on?
Linux
Relevant log output