racinmat opened this issue 8 months ago
If you mean `mistral_result2`, then it has an unconstrained `gen()` call; that doesn't have to follow the grammar. Or are you saying that `mistral_result` itself doesn't follow the grammar?
Did you mean to do something like:

```python
# command = 'Here is a math expression for two plus two: '
command = 'Here is a math expression for taylor polynomial: '
temp1 = mistral + command
mistral_result = temp1 + grammar + "\n"
print(f"mistral_result={str(mistral_result)}")
mistral_result2 = mistral_result + "Here is another example: " + grammar + "\n"
print(f"mistral_result2={str(mistral_result2)}")
```

?
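For reference, the distinction being made above is roughly the following. This is a hedged sketch, assuming `mistral` is a `guidance.models` wrapper, `command` a plain prompt string, and `grammar` a grammar object built with the library, as in the snippets in this thread:

```python
from guidance import gen

# Constrained: appending a grammar object forces the model's continuation
# to match that grammar.
mistral_result = mistral + command + grammar + "\n"

# Unconstrained: a bare gen() call places no grammar restriction, so whatever
# follows "Here is another example: " is free-form text.
mistral_result2 = mistral_result + "Here is another example: " + gen(max_tokens=30) + "\n"
```

Only the `+ grammar` appends are constrained; the `gen()` continuation is not.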
I know the `gen` is unconstrained, but the `+ grammar` generates only a single token; I would like to generate more of them, the whole equation.
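One way to get more than a single token out of a grammar-constrained append is to make the grammar itself recursive, so that one `+ grammar` can expand into a whole expression. Below is a minimal sketch in the spirit of the arithmetic-expression example in the guidance docs; the helper names (`select`, `one_or_more`, `zero_or_more`, `capture`, `models.Transformers`) are assumed from the library's public API and the model id is a placeholder, so treat it as a starting point rather than a verified fix:

```python
import guidance
from guidance import models, select, one_or_more, zero_or_more, capture

@guidance(stateless=True)
def number(lm):
    # One or more digits, optionally negative.
    n = one_or_more(select(["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]))
    return lm + select(["-" + n, n])

@guidance(stateless=True)
def operator(lm):
    return lm + select(["+", "-", "*", "/"])

@guidance(stateless=True)
def expression(lm):
    # A number, a binary operation, or a parenthesised sub-expression;
    # the recursive alternatives are what let the model emit a multi-token equation.
    return lm + select([
        number(),
        expression() + zero_or_more(" ") + operator() + zero_or_more(" ") + expression(),
        "(" + expression() + ")",
    ])

# Placeholder model id -- substitute whatever Mistral checkpoint you are loading.
mistral = models.Transformers("mistralai/Mistral-7B-v0.1")

lm = mistral + "Here is a math expression for two plus two: " + capture(expression(), "answer")
print(lm["answer"])
```

With a recursive rule like `expression`, the constrained append keeps emitting tokens until the grammar allows it to stop, instead of terminating after one token.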
Interesting. When I try with GPT-2, I get multiple tokens in the result. With Llama-7B, I am also only getting a single token.
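A rough way to reproduce that comparison (a sketch only: the checkpoint names are assumptions, and `expression()` refers to the recursive grammar sketched above):

```python
from guidance import models

gpt2 = models.Transformers("gpt2")                       # small sanity-check model
llama = models.Transformers("meta-llama/Llama-2-7b-hf")  # assumed 7B checkpoint

for name, lm in [("gpt2", gpt2), ("llama-7b", llama)]:
    out = lm + "Here is a math expression for two plus two: " + expression()
    print(name, str(out))
```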
**Is your feature request related to a problem? Please describe.**
I am not able to generate more than one token following a context-free grammar.

**Describe the solution you'd like**
I would like to generate a large amount of text following a grammar.

This is my code; how can I generate the whole equation using the context-free grammar? The generation afterwards does not follow the grammar, and I don't know how to tell it to follow it.