Closed expresspotato closed 4 months ago
Hi @expresspotato:
Thanks for the report! Unfortunately I was not able to reproduce this, I'm using:
When I set the `ToolConfig` mode to `NONE`, as in:
```python
from vertexai.generative_models import (
    FunctionDeclaration,
    GenerativeModel,
    Tool,
)
from vertexai.preview.generative_models import ToolConfig

get_product_info = FunctionDeclaration(
    name="get_product_info",
    description="Get the stock amount and identifier for a given product",
    parameters={
        "type": "object",
        "properties": {
            "product_name": {"type": "string", "description": "Product name"}
        },
    },
)

tool = Tool(
    function_declarations=[
        get_product_info,
    ],
)

tool_config = ToolConfig(
    function_calling_config=ToolConfig.FunctionCallingConfig(
        mode=ToolConfig.FunctionCallingConfig.Mode.NONE,
        allowed_function_names=[],
    )
)

model = GenerativeModel("gemini-1.5-pro-preview-0409")
response = model.generate_content(
    "Do you have the Pixel 8 Pro in stock?",
    tools=[tool],
    tool_config=tool_config,
)
response.text
```
I get this expected response (as if no tools are defined):
```
'I do not have access to real-time information, including inventory. To find out if the Pixel 8 Pro is in stock, I recommend checking with a retailer like Google Store or other electronics stores.'
```
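(Aside: the same preview `ToolConfig` class also exposes a forced function calling mode. The fragment below is a sketch, not part of the report above; it assumes `Mode.ANY` with `allowed_function_names`, which restricts the model to calling only the named functions. It would replace the `tool_config` assignment in the example.)

```python
# Sketch: forced function calling. With Mode.ANY the model is constrained to
# emit a call to one of the allowed functions rather than a text answer.
tool_config = ToolConfig(
    function_calling_config=ToolConfig.FunctionCallingConfig(
        mode=ToolConfig.FunctionCallingConfig.Mode.ANY,
        allowed_function_names=["get_product_info"],
    )
)
```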
When I set the `ToolConfig` mode to `AUTO`, as in:
```python
from vertexai.generative_models import (
    FunctionDeclaration,
    GenerativeModel,
    Tool,
)
from vertexai.preview.generative_models import ToolConfig

get_product_info = FunctionDeclaration(
    name="get_product_info",
    description="Get the stock amount and identifier for a given product",
    parameters={
        "type": "object",
        "properties": {
            "product_name": {"type": "string", "description": "Product name"}
        },
    },
)

tool = Tool(
    function_declarations=[
        get_product_info,
    ],
)

tool_config = ToolConfig(
    function_calling_config=ToolConfig.FunctionCallingConfig(
        mode=ToolConfig.FunctionCallingConfig.Mode.AUTO,
        allowed_function_names=[],
    )
)

model = GenerativeModel("gemini-1.5-pro-preview-0409")
response = model.generate_content(
    "Do you have the Pixel 8 Pro in stock?",
    tools=[tool],
    tool_config=tool_config,
)
response.candidates[0].function_calls
```
I get this expected function call response:
```
[name: "get_product_info"
args {
  fields {
    key: "product_name"
    value {
      string_value: "Pixel 8 Pro"
    }
  }
}]
```
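Once a function call like the one above comes back, the caller is responsible for running the matching local function. A minimal, API-free sketch of that dispatch step (the `get_product_info` stub and its return value are hypothetical; in the real SDK you would read `function_call.name` and the dict-like `function_call.args` from the response):

```python
# Hypothetical local implementation standing in for a real inventory lookup.
def get_product_info(product_name):
    return {"product_name": product_name, "in_stock": True}

# The args proto printed above, converted to plain Python.
fc_name = "get_product_info"
fc_args = {"product_name": "Pixel 8 Pro"}

# Dispatch by name and call with the model-supplied arguments.
local_functions = {"get_product_info": get_product_info}
result = local_functions[fc_name](**fc_args)
print(result)  # {'product_name': 'Pixel 8 Pro', 'in_stock': True}
```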
If you continue to see this behavior, please open a bug report in the public Vertex AI issue tracker at:
https://issuetracker.google.com/issues/new?component=1130925&template=1637248
Please include a full reproducible code sample there along with the versions you're using, so we can get this issue in front of the right product and engineering teams for a deeper look. Thanks!
I think this response missed the point. I am having the same issue. It is not that the tools aren't used for prompts that directly invoke them; it is that when any tools are provided, the model can no longer answer generalized queries. For example, in my case, if I provide any tools (even a single simple currency-conversion function), Gemini (1.5 via the API) is no longer able to answer general questions like:
"What time zone is Seattle, WA in?"
Without tools: "Seattle, Washington is in the Pacific Standard Time (PST) time zone."
With tools: "I am sorry, I cannot answer that question. I do not have access to timezone information."
"How far is it from Seattle to Portland"
Without tools: "The driving distance between Seattle, Washington and Portland, Oregon is approximately 172 miles (277 kilometers)."
With tools: "I am sorry, I cannot answer this question. I do not have access to any geographical information or distance calculation tools."
Its usage of the tools is also very poor. I gave it a function to get the GMT time and one to get the current city and state, then asked it for the current local time, and it failed hard across many different prompts. I haven't bothered looking into that further after seeing that it loses 100 IQ points when you pass it any tools; it makes sense that it also can't apply any information to the tool output or use the tools in reasoning, since it doesn't seem able to access any factual information while tools are in use.
Is the Google issue tracker really the right place for this kind of thing? I didn't see much in this area there (even generally using Vertex AI via API).
Understood, thanks for clarifying! Yes, some of the behavior of how the Gemini API handles tool use versus natural language responses changed in Gemini 1.0 Pro and Gemini 1.5 Pro/Flash with the introduction of forced function calling and parallel function calling. And I agree there has been some friction between the distinct function calling modes and direct API calls to Gemini. In fact, I have implemented (and seen implemented) a fallback tool that makes a general `generate_content` query to Gemini when none of the registered tools are applicable, which is a fragile pattern; I would also love to see more robust built-in fallbacks to Gemini when there is no applicable tool.
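A rough, API-free sketch of that fallback pattern (all names here are hypothetical, and the real fallback would route through a tool-free `generate_content` call rather than a lambda): register a catch-all tool so the model has somewhere to send questions that no real tool covers, then answer it without tools.

```python
def dispatch(function_call, handlers, fallback):
    """Route a model-emitted function call to a registered handler;
    if no handler matches, fall back to a plain, tool-free query."""
    handler = handlers.get(function_call["name"])
    if handler is None:
        return fallback(function_call["args"].get("question", ""))
    return handler(**function_call["args"])

# Stub handler and fallback standing in for real implementations.
handlers = {"get_product_info": lambda product_name: {"sku": "PX8P", "stock": 3}}
fallback = lambda question: f"(tool-free answer to: {question})"

# A call the tool covers, then one it doesn't.
print(dispatch({"name": "get_product_info",
                "args": {"product_name": "Pixel 8 Pro"}}, handlers, fallback))
print(dispatch({"name": "general_query",
                "args": {"question": "What time zone is Seattle in?"}},
               handlers, fallback))
```

The fragility the comment above mentions is visible even in this sketch: the fallback only fires if the model actually emits an unmatched call, which it is not guaranteed to do.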
The reason for pushing to the Google issue tracker is that there are GitHub issues like this in this repo and the Vertex AI Python SDK repo, which are not as focused on the core Gemini behavior that you mentioned. Whereas the Google Vertex AI issue tracker is a more standard way for us DevRellers to get it in front of the right product and engineering folks. So thanks for filing https://issuetracker.google.com/issues/358913898 ❤️ , I will CC some of the relevant function calling folks and pass some additional context in that issue.
What happened?
Hello,
Looks like Gemini 1.5 Pro tool use is completely broken, in that it gives some very strange answers. Consider the following code example:
Response:
Pretty strange; it seems like the tool config has no effect. Instead, we see this same response with a mode of `.AUTO` or no `ToolConfig` at all. Is function calling with generalized queries completely broken in Gemini, or am I missing something?