guidance-ai / guidance


Not passing the arguments for the tool use. #771

Open · Revanth-Badveli opened 7 months ago

Revanth-Badveli commented 7 months ago

**The bug**

I have two simple functions which I am testing for the tool-use case. However, I see that the tools are being called without their arguments and hence error out.

I could be doing something totally wrong; I'd really appreciate it if anyone could share their insights.

Definitions: `def get_email_from_name(lm, user_name)` and `def check_login_status(lm, email)`

Error:

```
Traceback (most recent call last):
  File "/Users/revanthreddy/Repos/function-call/guidance-functioncall.py", line 168, in <module>
    lm = llm + prompt_with_query + gen(max_tokens=200, tools=[ get_email_from_name, check_login_status], stop="Done.")
  File "/opt/homebrew/lib/python3.12/site-packages/guidance/models/_model.py", line 996, in __add__
    out = value(lm)
          ^^^^^^^^^
  File "/opt/homebrew/lib/python3.12/site-packages/guidance/_grammar.py", line 69, in __call__
    return self.f(model, *self.args, **self.kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.12/site-packages/guidance/library/_gen.py", line 161, in gen
    lm += tools[i].tool_call()
  File "/opt/homebrew/lib/python3.12/site-packages/guidance/models/_model.py", line 996, in __add__
    out = value(lm)
          ^^^^^^^^^
  File "/opt/homebrew/lib/python3.12/site-packages/guidance/_grammar.py", line 69, in __call__
    return self.f(model, *self.args, **self.kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.12/site-packages/guidance/library/_tool.py", line 60, in basic_tool_call
    lm += callable(*positional, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: get_email_from_name() missing 1 required positional argument: 'user_name'
ggml_metal_free: deallocating
```
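
For what it's worth, that final `TypeError` is just the standard Python error for calling a two-parameter function one argument short, so it looks like the captured argument never reaches the tool. A hypothetical standalone illustration (not using guidance at all):

```python
# Hypothetical illustration of the same TypeError outside of guidance:
def get_email_from_name(lm, user_name):
    return lm

get_email_from_name("model state")
# TypeError: get_email_from_name() missing 1 required positional argument: 'user_name'
```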

**To Reproduce**

Below is my Python code:

```python
import json
import requests
import guidance
from guidance import models, gen

path = "/Users/revanthreddy/Repos/llama.cpp/models/Meta-Llama-3-8B-Instruct.Q8_0.gguf"
llm = models.LlamaCpp(path, n_ctx=4096, verbose=True, n_gpu_layers=15)

def get_email_from_name(lm, user_name):
    if user_name.lower() == "john":
        return lm + "\nObservation: " +  "john.doe@example.com" + "\n"
    elif user_name.lower() == "revanth":
        return lm + "\nObservation: " +   "revanth.reddy@example.com" + "\n"
    else:
        return lm + "\nObservation: User not found" + "\n"

def check_login_status(lm, email):
    logged_in_users = ["john.doe@example.com", "revanth.reddy@example.com"]
    if email in logged_in_users:
        return lm +  "\nObservation: " + f"User with email {email} is currently logged in." + "\n"
    else:
        return lm + "\nObservation: " + f"User with email {email} is not logged in." + "\n"

prompt = """Answer the following questions as best you can. You have access only to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought 1: you should always think about what to do
Action 1: the action to take, has to be one of {tool_names}.
Observation 1: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought N: I now know the final answer.
Final Answer: the final answer to the original input question.
Done.

Example:
Question: What is the square root of the age of Brad Pitt?
Thought 1: I should find out how old Brad Pitt is.
Action 1: age(Brad Pitt)
Observation 1: 56
Thought 2: I should find the square root of 56.
Action 2: sqrt(56)
Observation 2: 7.48
Thought 3: I now know the final answer.
Final Answer: 7.48
Done.

Question: {query}
"""

tools = {
    "get_email_from_name" : "Given user's name as input to the function, returns the email id of the user",
    "check_login_status" : "Given user's email id, return the login status of the user"
}

tool_names = list(tools.keys())

query = "Is the user named john logged in?"
prompt_with_query = prompt.format(tools=tools, tool_names=list(tools.keys()), query=query)

lm = llm + prompt_with_query + gen(max_tokens=200, tools=[ get_email_from_name, check_login_status], stop="Done.")

print(lm)
```

**System info (please complete the following information):**
 - OS (e.g. Ubuntu, Windows 11, Mac OS, etc.): macOS 14.1
 - Guidance Version (`guidance.__version__`): 0.1.13
FoxBuchele commented 7 months ago

Does it behave any differently if you add the `@guidance` decorator to your tools? Everything else looks fine to me at first glance.
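
Something like the following is what I mean (untested sketch, just your two functions from the snippet above with the decorator applied; the rest of the script stays the same):

```python
import guidance
from guidance import gen

@guidance
def get_email_from_name(lm, user_name):
    # Same body as before; as far as I can tell, the decorator is what lets
    # guidance capture user_name from the generated Action text and pass it in.
    if user_name.lower() == "john":
        return lm + "\nObservation: john.doe@example.com\n"
    elif user_name.lower() == "revanth":
        return lm + "\nObservation: revanth.reddy@example.com\n"
    return lm + "\nObservation: User not found\n"

@guidance
def check_login_status(lm, email):
    logged_in_users = ["john.doe@example.com", "revanth.reddy@example.com"]
    status = "is currently logged in" if email in logged_in_users else "is not logged in"
    return lm + f"\nObservation: User with email {email} {status}.\n"

lm = llm + prompt_with_query + gen(
    max_tokens=200,
    tools=[get_email_from_name, check_login_status],
    stop="Done.",
)
```

This is modeled on the calculator example in the README, where the tool functions passed to `gen(tools=...)` are all decorated with `@guidance`.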