guidance-ai / guidance

A guidance language for controlling large language models.

Allow generating more tokens which follow a grammar #702

Open racinmat opened 8 months ago

racinmat commented 8 months ago

Is your feature request related to a problem? Please describe. I am not able to generate more than one token following a context-free grammar.

Describe the solution you'd like I would like to generate a large amount of text that follows a grammar.

This is my code; how can I generate the whole equation using the context-free grammar?

import os
import string

import guidance
from guidance import models, one_or_more, select, zero_or_more, system, regex, gen, Tool

# %% defining the grammar from https://github.com/guidance-ai/guidance?tab=readme-ov-file#context-free-grammars
# stateless=True indicates this function does not depend on LLM generations
@guidance(stateless=True)
def number(lm):
    n = one_or_more(select(['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']))
    # Allow for negative or positive numbers
    return lm + select(['-' + n, n])

@guidance(stateless=True)
def variable(lm):
    return lm + select(list(string.ascii_lowercase))

@guidance(stateless=True)
def operator(lm):
    return lm + select(['+', '*', '**', '/', '-', "="])

@guidance(stateless=True)
def expression(lm):
    # Either
    # 1. A number (terminal)
    # 2. two expressions with an operator and optional whitespace
    # 3. An expression with parentheses around it
    return lm + select([
        number(),
        variable(),
        expression() + zero_or_more(' ') + operator() + zero_or_more(' ') + expression(),
        '(' + expression() + ')'
    ])

# %% top-level grammar: either a bare expression, or two expressions joined by ' = <int>; '
grammar = select([expression(), expression() + regex(r' = \d+; ') + expression()])

# %% loading llm
# sending all layers to gpu
mistral = models.LlamaCpp(
    r"C:\Projects\text-generation-experiments\privateGPT\models\mistral-7b-instruct-v0.2.Q3_K_M.gguf", n_gpu_layers=99)

# command = 'Here is a math expression for two plus two: '
command = 'Here is a math expression for taylor polynomial: '
temp1 = mistral + command
mistral_result = temp1 + grammar
print(mistral_result)
mistral_result2 = mistral_result + gen(max_tokens=30)
print(mistral_result2)

The generation afterwards does not follow the grammar, and I don't know how to tell it to do so.

riedgar-ms commented 8 months ago

If you mean mistral_result2, then it has an unconstrained gen() call; that doesn't have to follow the grammar. Or are you saying that mistral_result itself doesn't follow the grammar?
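
For reference, a minimal sketch of that distinction, reusing the mistral_result and grammar objects from the script above (the variable names on the left are mine, added only for illustration):

free_text   = mistral_result + gen(max_tokens=30)  # unconstrained: the model may write anything
constrained = mistral_result + grammar             # this span must parse under the grammar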

riedgar-ms commented 8 months ago

Did you mean to do something like:

# command = 'Here is a math expression for two plus two: '
command = 'Here is a math expression for taylor polynomial: '
temp1 = mistral + command
mistral_result = temp1 + grammar + "\n"
print(f"mistral_result={str(mistral_result)}")
mistral_result2 = mistral_result + "Here is another example: " + grammar + "\n"
print(f"mistral_result2={str(mistral_result2)}")

?

racinmat commented 8 months ago

I know the gen is unconstrained, but the + grammar step generates only a single token. I would like to generate more of them, i.e. the whole equation.
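
One possible explanation (my assumption, not confirmed in the thread): the top-level select accepts a bare number() or variable(), so a single digit or letter is already a complete parse and the constrained decoder is free to stop there. Below is a sketch of a top-level rule that cannot terminate without at least one operator, reusing number, variable, operator and expression from the script above:

@guidance(stateless=True)
def equation(lm):
    # An operand is a number, a variable, or a parenthesised sub-expression.
    operand = select([number(), variable(), '(' + expression() + ')'])
    # Require at least one "operator operand" step, so a lone digit or letter
    # can no longer satisfy the grammar on its own.
    return lm + operand + one_or_more(zero_or_more(' ') + operator() + zero_or_more(' ') + operand)

grammar = equation()
mistral_result = temp1 + grammar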

riedgar-ms commented 8 months ago

Interesting. When I try with GPT2, I get multiple tokens in the result. With Llama-7B, however, I am also only getting a single token.
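
For anyone trying to reproduce the comparison, a sketch of the two setups (the "gpt2" checkpoint name and the GGUF path are placeholders, and the grammar object is assumed to come from the original script):

from guidance import models

# Hugging Face GPT2 backend: reportedly yields multiple tokens for the grammar
gpt2 = models.Transformers("gpt2")
print(gpt2 + 'Here is a math expression for taylor polynomial: ' + grammar)

# llama.cpp backend with a 7B GGUF model: reportedly stops after a single token
llama = models.LlamaCpp("path/to/model.gguf", n_gpu_layers=99)
print(llama + 'Here is a math expression for taylor polynomial: ' + grammar)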