weedge opened this issue 1 month ago
Hi @weedge , The issue is that sometimes the model fails to utilize the provided tools for a given query. You can force the model to use the tools. Please refer to the example below:
```python
import google.generativeai as genai
from google.generativeai.types import content_types
from collections.abc import Iterable


def tool_config_from_mode(mode: str, fns: Iterable[str] = ()):
    """Create a tool config with the specified function calling mode."""
    return content_types.to_tool_config(
        {"function_calling_config": {"mode": mode, "allowed_function_names": fns}}
    )
```
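For reference, here is a minimal stand-alone sketch of the dict such a helper builds, runnable without the SDK installed. `tool_config_dict` is a hypothetical name mirroring the helper above; the mode is one of `"none"` (never call tools), `"auto"` (model decides), or `"any"` (force a tool call), and `allowed_function_names` only restricts the choice in `"any"` mode:

```python
from collections.abc import Iterable


def tool_config_dict(mode: str, fns: Iterable[str] = ()) -> dict:
    """Build the raw function_calling_config dict (hypothetical helper, no SDK).

    mode: "none" (never call tools), "auto" (model decides),
          or "any" (force a call, limited to allowed_function_names).
    """
    return {"function_calling_config": {"mode": mode, "allowed_function_names": list(fns)}}


config = tool_config_dict("any", ["add", "multiply"])
print(config["function_calling_config"]["mode"])                    # any
print(config["function_calling_config"]["allowed_function_names"])  # ['add', 'multiply']
```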
```python
def run_auto_function_calling():
    """
    Function calls fit naturally into [multi-turn chats](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#multi-turn), as they capture a back-and-forth interaction between the user and the model. The Python SDK's [`ChatSession`](https://ai.google.dev/api/python/google/generativeai/ChatSession) is a great interface for chats because it handles the conversation history for you, and the `enable_automatic_function_calling` parameter simplifies function calling even further.
    """
    # add, subtract, multiply, and divide are assumed to be defined earlier
    # as plain Python functions taking two numbers.
    model = genai.GenerativeModel(
        model_name="gemini-1.5-flash-latest",
        tools=[add, subtract, multiply, divide],
        system_instruction="You are a helpful assistant who converses with a user and answers questions. Respond concisely to general questions.",
    )
    fxn_tools = ["add", "subtract", "multiply", "divide"]
    tool_config = tool_config_from_mode("any", fxn_tools)
    chat = model.start_chat(enable_automatic_function_calling=True)
    response = chat.send_message(
        [
            "what's your name?",
            "I have 57 cats, each owns 44 mittens, how many mittens is that in total?",
        ],
        tool_config=tool_config,
        # stream=True,  # streaming is not supported with enable_automatic_function_calling=True
    )
    # print(f"run_auto_function_calling response: {response}")
    for content in chat.history:
        print(content.role, "->", [type(part).to_dict(part) for part in content.parts])
        print("-" * 80)
```
Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.
At some point the models got much better at doing arithmetic themselves (or they have their own calculator), so maybe that's why they're skipping this function call now.
We should change this example code to use the "turn on the lights" example, since code-execution is also a better way of doing this.
Description of the bug:
Actual vs expected behavior:
result: sending `"what's your name?"` produces no `function_call` in the chat history.
Any other information you'd like to share?
`pip show google-generativeai`