guidance-ai / guidance

A guidance language for controlling large language models.
MIT License

Select function with temperature option #773

Open eliranwong opened 5 months ago

eliranwong commented 5 months ago

Is your feature request related to a problem? Please describe.

I want the select function to allow me to set a temperature, like the gen function does. I am very frustrated that the select function returns different answers inconsistently.

Describe the solution you'd like

I want to set the temperature to 0.0 for the select function so that it always returns the same answer.
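To clarify what temperature 0.0 would mean here: the model would always pick the highest-scoring option (greedy argmax), while a positive temperature samples from a softened distribution. Below is a minimal pure-Python sketch of that behavior under assumed per-option log-probabilities; `select_with_temperature` is a hypothetical illustration, not guidance's actual API (guidance scores options token by token under the hood):

```python
import math
import random

def select_with_temperature(options, logprobs, temperature=0.0, rng=None):
    """Pick one option given per-option log-probabilities (hypothetical sketch).

    temperature == 0.0 -> greedy argmax, fully deterministic.
    temperature > 0.0  -> sample from the temperature-softened distribution.
    """
    if temperature == 0.0:
        # Deterministic: always return the most likely option.
        return max(zip(options, logprobs), key=lambda pair: pair[1])[0]
    rng = rng or random.Random()
    weights = [math.exp(lp / temperature) for lp in logprobs]
    return rng.choices(options, weights=weights, k=1)[0]
```

With temperature 0.0 the same inputs always yield the same choice, which is exactly the consistency this feature request asks for.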


eliranwong commented 5 months ago

btw, I am using LlamaCpp as the backend.

Harsha-Nori commented 5 months ago

Hi @eliranwong, could you give us an example of where select returns different answers each time? I'd like to look into this more.

eliranwong commented 5 months ago

@Harsha-Nori Thanks for following up. Below is an example that I integrated into my project https://github.com/eliranwong/freegenius. The experience is painful due to inconsistent answers.

I have simplified a code block as an example below.

from guidance import models, select

def screening(lm, user_input) -> bool:
    tool = False

    print("```screening")
    thought = "Thought: First, I must carefully distinguish whether the given request is formulated like a greeting, a question, a command, a statement, an issue, a description."
    print(thought)
    lm += f"""<|im_start|>user
Assess the following request and comment whether an additional tool is needed to address it:
<request>{user_input}</request><|im_end|>
<|im_start|>assistant
{thought}
Observation: The given request is formulated like {select(["a greeting", "a question", "a command", "a statement", "an issue", "a description"], name="question")}.
"""
    question = lm.get("question")
    print(f"""Observation: The given request is formulated like {question}.""")
    if question in ("a greeting", "a question", "an issue", "a description"):
        thought = "Thought: Next, I must carefully distinguish whether the requested information is about greeting, common knowledge, math, translation, published content, acquired knowledge, historical records, programming knowledge, religious knowledge, insights obtainable from literature, textbook material, evolving data, recent updates, latest information, current time, current weather, up-to-date news, information specific to your device, or information unknown to me."
        print(thought)
        lm += f"""{thought}
Observation: The requested information is about {select(["greeting", "common knowledge", "math", "translation", "published content", "trained knowledge", "historical records", "programming knowledge", "religious knowledge", "insights obtainable from literature", "textbook content", "evolving data", "recent updates", "latest information", "current time", "current weather", "up-to-date news", "information specific to your device", "information unknown to me"], name="information")}.
"""
        information = lm.get("information")
        print(f"""Observation: The requested information is about {information}.""")
        if information in ("evolving data", "recent updates", "latest information", "current time", "current weather", "up-to-date news", "information specific to your device", "information unknown to me"):
            tool = True
    else:
        thought = "Thought: Next, I must carefully distinguish whether the given request asks for generating a text-response or carrying out a task on your device."
        print(thought)
        lm += f"""{thought}
Observation: The given request asks for {select(["greeting", "calculation", "translation", "writing a text-response", "carrying out a task on your device"], name="action")}.
"""
        action = lm.get("action")
        print(f"""Observation: The given request asks for {action}.""")
        if action in ("carrying out a task on your device",):
            tool = True

    print(f"""Comment: Tool may {"" if tool else "not "}be required.""")
    print("```")

    return tool

def screen_user_request(user_request: str) -> bool:
    lm = models.LlamaCpp(
        "model.gguf",
        echo = False,
    )
    try:
        tool = screening(lm, user_request)
    except Exception:
        tool = True
    lm.reset()
    del lm
    return tool

instructions = (
    "Hi! How are you?",
    "What time is it now?",
    "Open file manager on my device",
    "How to open file manager on Ubuntu",
    "What is the current weather in New York",
    "How many files in my current directories",
    "Who is Adam in the bible?",
    "What is machine learning?",
    "Create a London map.",
    "Send an email to abc@abc.com",
    "Calculate 1 + 1",
    "Translate 'Hi' into Spanish",
    "How many days in a year?",
)

for instruction in instructions:
    for i in range(20):
        screen_user_request(instruction)
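One way to make the inconsistency concrete is to count how many distinct answers the same prompt produces across repeated runs. The sketch below assumes a `classify` callable standing in for one `screen_user_request`-style call; with deterministic selection the counter should collapse to a single entry:

```python
from collections import Counter

def measure_consistency(classify, prompt, n=20):
    """Run the same prompt n times and tally the distinct answers returned."""
    return Counter(classify(prompt) for _ in range(n))

# With a deterministic stub classifier, only one answer ever appears:
counts = measure_consistency(lambda p: "a question", "What time is it?", n=5)
# counts == Counter({"a question": 5})
```

A deterministic select would make `len(counts) == 1` for every prompt; the bug report is that it currently is not.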
eliranwong commented 5 months ago

@Harsha-Nori I tried to simplify my code for your testing. Many thanks.

guidance would be very useful if it were able to produce consistent results.

Thanks again for your good work.

Harsha-Nori commented 5 months ago

FYI I'm traveling at the moment and won't be able to get to this until next week, but will take a look then

eliranwong commented 4 months ago

May I ask if there are any updates?