Closed: @decision-dev closed this issue 8 months ago.
@koverholt Can you take a look?
Thanks @decision-dev for the detailed bug report. I tried a few fixes, but I kept running into the same error as you.
Since this repository deals mostly with sample notebooks and content rather than core model issues, I suggest opening a bug report in the Vertex AI issue tracker at https://issuetracker.google.com/issues/new?component=1130925&template=1637248. When you open an issue, I can also comment there to confirm that I can reproduce it and forward it to the engineering team that works on function calling.
@koverholt When I click on the Vertex AI issue tracker link that you sent, I get a login screen asking me to authenticate with a google.com username. Is there a different link for non-Google employees?
Apologies, I used the wrong link in my initial post (which was sent in your email notification). I've edited the post to have the corrected link, which is https://issuetracker.google.com/issues/new?component=1130925&template=1637248.
@koverholt Thank you. I submitted the issue here https://issuetracker.google.com/326497502
This has been fixed (and verified) upstream in https://issuetracker.google.com/326497502. Thanks for reporting!
With which versions should this be working? I continue to get the error with what I think are the latest versions:
google-cloud-aiplatform 1.43.0 Vertex AI API client library
google-generativeai 0.3.2 Google Generative AI High level API client library and tools.
google-ai-generativelanguage 0.4.0 Google Ai Generativelanguage API client library
@juancalvof, it might be the case that this has not propagated to the cloud region you are using; I tested with us-central1. Could you update https://issuetracker.google.com/326497502 with information about the location, inputs, and outputs you are seeing?
Hello @koverholt! My apologies for not replying sooner. I was diverted to other matters and forgot about this.
The notebook works fine for me. Where I get the error is when using LangChain. LangChain makes the call at line 376 to send_message, which ends up making the call at line 435: gapic_response = self._prediction_client.generate_content(request=request), as in the initial log output of this thread/issue.
While debugging, I was able to extract the tool from the request in this format:
[function_declarations {
  name: "query_bla"
  description: " Input to this tool is a ....\"\"\"\n "
  parameters {
    type_: OBJECT
    properties {
      key: "query"
      value {
        type_: STRING
        description: "bla, bla, bla"
      }
    }
    required: "query"
  }
}]
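For readers less familiar with the proto text format, the dump above corresponds to roughly the following JSON-style tool schema. This is my reconstruction: the name and descriptions are copied from the dump, while the layout follows the standard OpenAPI-subset shape that the Gemini function calling API accepts.

```python
import json

# Reconstruction of the tool from the proto dump above as a
# JSON-style schema. "query_bla" and the descriptions come from the
# dump; the nesting is the usual OpenAPI-subset layout.
tool = {
    "function_declarations": [
        {
            "name": "query_bla",
            "description": "Input to this tool is a ...",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "bla, bla, bla",
                    }
                },
                "required": ["query"],
            },
        }
    ]
}

print(json.dumps(tool, indent=2))
```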
I have updated the packages to the latest versions. My logged errors:
File ".venv/lib/python3.11/site-packages/langchain_google_vertexai/chat_models.py", line 376, in _generate
response = chat.send_message(
^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 723, in send_message
return self._send_message(
^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 810, in _send_message
response = self._model._generate_content(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py", line 431, in _generate_content
gapic_response = self._prediction_client.generate_content(request=request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/google/cloud/aiplatform_v1beta1/services/prediction_service/client.py", line 2080, in generate_content
response = rpc(
^^^^
File ".venv/lib/python3.11/site-packages/google/api_core/gapic_v1/method.py", line 131, in __call__
return wrapped_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/google/api_core/grpc_helpers.py", line 78, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.InternalServerError: 500 Internal error encountered.
I am using the same GCP project and location for both the notebook and the LangChain code, so I don't think the error is related to that.
Please let me know if you would like me to open a new issue. Thank you for your assistance and time! :)
Now I'm super confused.
My code works with an OpenAI model and my custom tool. My code works with a Gemini model and a default tool. My code returns the 500 internal error with a Gemini model and my custom tool.
Hi @juancalvof,
Thanks for the info. It might be the case that your FunctionDeclaration is specifying types or a structure that is causing the API to give a 500 error. Hard to say without seeing the full code to reproduce. So that we can look at the specific details of your example, could you open a new issue at https://issuetracker.google.com/issues/new?component=1130925&template=1637248? Thanks!
I'm new to building agents and using LLMs. I am currently encountering a KeyError: 'agent'
when initializing an AgentExecutor object in my Python script. Here's the traceback; is there any way I could fix this?
KeyError Traceback (most recent call last)
File c:\Users\WORK\OneDrive\Documents\drug_order_chatbot\drug_bot.py:1
----> 1 agent_executor = AgentExecutor(agent=agents,
      2                                memory=memory,
      3                                tools=image,
      4                                verbose=True,
      5                                return_intermediate_steps=True)

File c:\Users\WORK\OneDrive\Documents\drug_order_chatbot\venv\lib\site-packages\langchain_core\load\serializable.py:120, in Serializable.__init__(self, **kwargs)
    119 def __init__(self, **kwargs: Any) -> None:
--> 120     super().__init__(**kwargs)
    121     self._lc_kwargs = kwargs

File c:\Users\WORK\OneDrive\Documents\drug_order_chatbot\venv\lib\site-packages\pydantic\main.py:339, in pydantic.main.BaseModel.__init__()

File c:\Users\WORK\OneDrive\Documents\drug_order_chatbot\venv\lib\site-packages\pydantic\main.py:1102, in pydantic.main.validate_model()

File c:\Users\WORK\OneDrive\Documents\drug_order_chatbot\venv\lib\site-packages\langchain\agents\agent.py:980, in AgentExecutor.validate_tools(cls, values)
    978 """Validate that tools are compatible with agent."""
    979 agent = values["agent"]
--> 980 tools = values["tools"]
    981 allowed_tools = agent.get_allowed_tools()
    982 if allowed_tools is not None:

KeyError: 'tools'
see part of my code here:
llm = ChatVertexAI(model_name='gemini-pro', temperature=0)
memory = ConversationBufferMemory(
    memory_key='chat_history', output_key='output', return_messages=True)

# /// chat prompt
chat_prompt = ChatPromptTemplate(
    input_variables=['agent_scratchpad', 'chat_history', 'message'],
    messages=[
        HumanMessagePromptTemplate(
            prompt=PromptTemplate(
                input_variables=[],
                template=(
                    '''You are a powerful and convincing salesperson '''),
            ),
        ),
        MessagesPlaceholder(variable_name='chat_history'),
        HumanMessagePromptTemplate(
            prompt=PromptTemplate(
                input_variables=['message'],
                template='{message}'
            ),
        ),
        MessagesPlaceholder(variable_name='agent_scratchpad')
    ],
)

# /// Building conversational agent
chat_bot_with_tools = llm.bind(functions=[image])
agents = (
    {
        'message': lambda x: x['message'],
        'chat_history': lambda x: x['chat_history'],
        'agent_scratchpad': lambda x: format_to_openai_function_messages(
            x['intermediate_steps'])
    }
    | chat_prompt
    | chat_bot_with_tools
    | PydanticFunctionsOutputParser(pydantic_schema={
        image.name: image.args_schema
    })
)
agent_executor = AgentExecutor(agent=agents,
                               tools=image,
                               memory=memory,
                               verbose=True)
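One detail worth double-checking in the snippet above (an observation on my part, not a confirmed diagnosis): AgentExecutor generally expects tools to be a sequence of tools, while the code passes a single tool object (tools=image). When such a field fails validation, it can be absent from the parsed values and surface later as a KeyError rather than a clearer type error. A minimal, hypothetical normalization sketch:

```python
def as_tool_list(tools):
    """Hypothetical helper: normalize a single tool or an iterable of
    tools into a list, since AgentExecutor-style classes validate that
    `tools` is a sequence. Passing a bare tool object can make the
    field fail validation and disappear from the parsed values,
    surfacing later as KeyError: 'tools'."""
    if tools is None:
        return []
    if isinstance(tools, (list, tuple)):
        return list(tools)
    return [tools]

# Passing the equivalent of `tools=image` still yields a valid list:
print(as_tool_list("image_tool"))  # ['image_tool']
```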
@GIDDY269, it might be the case that the versions of the packages you are using, such as google-cloud-aiplatform, langchain, and related packages, are installed in a combination that does not work well together. It's difficult to say given the information you've posted, but one thing to try is to ensure that you have the latest versions of those packages, then downgrade individual packages to see if you can get to a working state.
Many of the notebooks in this repository have known working versions of those packages pinned to work together. If you continue to have issues, feel free to open an issue at https://issuetracker.google.com/issues/new?component=1130925&template=1637248 so we can take a closer look. Thanks!
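When comparing environments like this, it can help to record the exact installed versions. Here is a small stdlib-only sketch; the package names are the ones mentioned in this thread, so adjust as needed:

```python
from importlib import metadata

def installed_versions(packages):
    """Return a mapping of package name -> installed version string,
    or None if the package is not installed in this environment."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None
    return versions

print(installed_versions([
    "google-cloud-aiplatform",
    "langchain",
    "langchain-google-vertexai",
]))
```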
Thanks, @koverholt!
Below is the link to the issue along with my latest findings. I believe I've pinpointed the bug.
@juancalvof, thanks so much for opening that issue with details! I'll CC the appropriate engineering folks for Function Calling.
@juancalvof, any update on this?
Hey! I continue to have this issue when using https://python.langchain.com/v0.1/docs/modules/model_io/chat/structured_output/, but it now works with some other tools that were failing a few weeks ago. I updated the issue: https://issuetracker.google.com/u/2/issues/331927553
I believe it's just flaky, not that single words can make a difference.
Or, at least, there's a new bug present in 1.5.
I tried changing several things in mine and it would "work"; then I'd run it three times total, and it would not work.
Full request: https://pastebin.com/hcbqmR72
Contact Details
dev@chezrothman.com
File Name
gemini/function-calling/intro_function_calling.ipynb
What happened?
I modified the get_location function as follows:
and I changed the prompt to be:
It works if I use an array of strings like this:
So the problem appears to be specifically with an array of objects.
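Since the original get_location snippets were not captured here, the following is a generic illustration of the two parameter shapes being contrasted. The field names are invented for the example and are not from the notebook:

```python
# Array-of-strings parameter: the shape reported to work.
array_of_strings = {
    "type": "array",
    "items": {"type": "string"},
}

# Array-of-objects parameter: the shape reported to trigger the
# failure. Field names ("name", "quantity") are illustrative only.
array_of_objects = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "quantity": {"type": "integer"},
        },
    },
}

print(array_of_strings["items"]["type"])  # string
print(array_of_objects["items"]["type"])  # object
```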