Closed RevanthRameshkumar closed 5 months ago
Hi Revanth,
It works fine for Phi-2; can you check this colab?
For "codellama/CodeLlama-13b-Instruct-hf", I can't experiment right away on Colab because of the model size, but could you try something?
```python
from syncode import Syncode

grammar = """ start: month " " day
day: /[1-9]/ | /[1-2][0-9]/ | /3[0-1]/
month: "January" | "February" | "March" | "April" | "May" | "June" | "July" | "August" | "September" | "October" | "November" | "December"
"""

model_name = "codellama/CodeLlama-13b-Instruct-hf"

# Load the SynCode-augmented model
syn_llm = Syncode(model=model_name, mode='grammar_strict', grammar=grammar, parse_output_only=True)

inp = "When is the christmas day?"
output = syn_llm.infer(inp)
print(f"Syncode augmented LLM output:\n{output}")
```
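The grammar above restricts generation to strings like `December 25`. As a quick sanity check of what that language admits, here is a small stdlib sketch that encodes the same rules as a regex (`is_valid_date` is my own helper for illustration, not part of SynCode):

```python
import re

# Regex equivalent of the date grammar above: month, a single space,
# then a day in 1-31 with no leading zero.
MONTHS = ("January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December")
DATE_RE = re.compile(r"(?:%s) (?:[1-9]|[1-2][0-9]|3[0-1])" % "|".join(MONTHS))

def is_valid_date(text: str) -> bool:
    """Return True iff `text` is in the language defined by the grammar."""
    return DATE_RE.fullmatch(text) is not None
```

For example, `is_valid_date("December 25")` holds, while `"December 32"` is rejected because no day alternative matches `32`.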
I made two changes to the original code:

1) I changed the grammar mode to `grammar_strict`.
2) There is an explicit `" "` space in the start rule.

Because this is an instruct model, it might not work right away with the `grammar_mask` mode. `grammar_strict` is the stricter mode that we soon want to make the default mode (PR). I checked that this works fine with "codellama/CodeLlama-7b-Instruct-hf".
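To see why the explicit `" "` in the start rule matters, here is a small sketch using regex stand-ins for the grammar rules (my own illustration, not SynCode internals): without the space token, `month` and `day` must be adjacent, so `December 25` is no longer in the language.

```python
import re

MONTH = (r"(?:January|February|March|April|May|June|July|August"
         r"|September|October|November|December)")
DAY = r"(?:[1-9]|[1-2][0-9]|3[0-1])"

start_with_space = re.compile(MONTH + " " + DAY)  # start: month " " day
start_no_space = re.compile(MONTH + DAY)          # start: month day

print(bool(start_with_space.fullmatch("December 25")))  # True
print(bool(start_no_space.fullmatch("December 25")))    # False: space not allowed
print(bool(start_no_space.fullmatch("December25")))     # True: tokens fused
```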
Hi! I couldn't get it working with Phi-2 either when I ran the code on an A6000 on my GPU box. I'll try the Colab example though to make sure. I'll also run the above and let you know the results.
Hey, it works on the Colab! It must be some difference in environment between my GPU box and the Colab.
Hi there, I tried out the examples in the README and none of them seem to work as expected (with both the Phi model and Llama 2).
example:
is the output for