phidatahq / phidata

Build AI Assistants with memory, knowledge and tools.
https://docs.phidata.com
Mozilla Public License 2.0

the way to specify a function is blurry #991

Open ju1987yetchung opened 1 month ago

ju1987yetchung commented 1 month ago


According to the documentation, I wrote my code as below:

```python
from phi.assistant import Assistant
from phi.llm.ollama import Ollama


def showcase():
    """If asked to use showcase, you should use showcase."""
    a = 1
    print(a)
    print("+++++++++")
    b = 2
    print(b)
    print("+++++++++")
    print("hello, if you see me, it means function call is under implementation")


assistant = Assistant(
    llm=Ollama(model="llama3:8b"),
    description="you are an assistant that can use function tools to reply",
    # instructions=["when people say something you need choose right function to solve problem or reply"],
    # Add functions or Toolkits
    tools=[showcase],
    tool_choice="auto",
    use_tools=True,
    # Show tool calls in LLM response.
    show_tool_calls=True,
    debug_mode=False,
)

assistant.print_response("Could you use showcase to let me check whether it is useful, thank you!")  # , markdown=True)
```

But when I run this code, there is about a 50% chance that it runs fine, and about a 50% chance that it returns an error like the one below:

*(error screenshot)*

I think this is because, about half the time, the assistant chooses to reply directly instead of running the tool. So I wanted to force the assistant to use the tool, and changed the parameter to: `tool_choice={"type": "function", "function": {"name": "showcase"}}`

I don't know whether I wrote it the right way, but after running, the old problem is still there. Did I write `tool_choice` in the wrong format? What should the right format be?
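For reference, the forced-tool value I tried follows the OpenAI chat-completions convention for `tool_choice`. A minimal sketch of the three value shapes that convention defines (this assumes the backend honors OpenAI-style `tool_choice`; whether phidata forwards it to a local Ollama model, and whether that model respects it, is exactly what I'm unsure about):

```python
# OpenAI-style tool_choice values (assumption: the backend accepts
# this convention; local Ollama models may ignore it entirely).
tool_choice_auto = "auto"    # let the model decide whether to call a tool
tool_choice_none = "none"    # forbid tool calls, always reply directly
tool_choice_forced = {       # force one specific function by name
    "type": "function",
    "function": {"name": "showcase"},
}

# The forced form names the function to call:
print(tool_choice_forced["function"]["name"])  # → showcase
```

If the model backend ignores `tool_choice`, strengthening the `instructions` to explicitly tell the model to call `showcase` is another thing I could try, but that only raises the probability rather than guaranteeing the call.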