What are your library versions?
I tried your snippet, and it works fine on my side:
from langgraph.prebuilt import ToolExecutor, ToolInvocation
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.utils.function_calling import convert_to_openai_tool
import json

tool_executor = ToolExecutor(tools)
functions = [convert_to_openai_tool(t)["function"] for t in tools]

model = ChatGoogleGenerativeAI(model='gemini-pro', temperature=0, google_api_key=api_key)
model_with_tools = model.bind(functions=functions)

message = "What's four times 23"
result = model_with_tools.invoke(message)
check = hasattr(result, 'additional_kwargs')

action = ToolInvocation(
    tool=result.additional_kwargs["function_call"]["name"],
    tool_input=json.loads(
        result.additional_kwargs["function_call"]["arguments"]),
)
response = tool_executor.invoke(action)
print(response)
# output: 92
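For completeness, the snippet above assumes that tools and api_key are already defined; a minimal stand-in, reusing the multiply tool from the full code quoted below, could look like this:

from langchain_core.tools import tool

@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int

tools = [multiply]
api_key = "YOUR_GOOGLE_API_KEY"  # placeholder, not a real key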
@lkuligin Thanks for the reply. The full code is below.
import langchain
import json
import os

from langchain_core.tools import tool
from langchain_google_genai import ChatGoogleGenerativeAI
from langgraph.prebuilt import ToolExecutor, ToolInvocation
from langchain_core.utils.function_calling import convert_to_openai_tool

os.environ['GOOGLE_API_KEY'] = 'API KEY'
langchain.debug = True

@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int

@tool
def add(first_int: int, second_int: int) -> int:
    """Add two integers."""
    return first_int + second_int

@tool
def exponentiate(base: int, exponent: int) -> int:
    """Exponentiate the base to the exponent power."""
    return base ** exponent

tools = [multiply, add, exponentiate]
tool_executor = ToolExecutor(tools)
functions = [convert_to_openai_tool(t)["function"] for t in tools]

model = ChatGoogleGenerativeAI(model='gemini-pro', temperature=0)
model_with_tools = model.bind(functions=functions)

message = "What's four times 23"
result = model_with_tools.invoke(message)
check = hasattr(result, 'additional_kwargs')

try:
    action = ToolInvocation(
        tool=result.additional_kwargs["function_call"]["name"],
        tool_input=json.loads(
            result.additional_kwargs["function_call"]["arguments"]),
    )
    response = tool_executor.invoke(action)
    print(response)
except KeyError:
    result = model.invoke(message)
    print(result.content)
And here are the versions of langchain and related packages.
langchain 0.1.9
langchain-community 0.0.24
langchain-core 0.1.27
langchain-experimental 0.0.52
langchain-google-genai 0.0.9
langchain-google-vertexai 0.1.0
langchainhub 0.1.14
langdetect 1.0.9
langgraph 0.0.24
langsmith 0.1.10
When I run the code, I get the error below.
Retrying langchain_google_genai.chat_models._chat_with_retry.<locals>._chat_with_retry in 2.0 seconds as it raised InternalServerError: 500 An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting.
And when I changed the source to this, it works fine.
glm.Tool(
    function_declarations=[_convert_to_genai_function(fc) for fc in function_calls],
)
Please try downgrading langchain-google-genai to 0.0.7 for now; that should work. We'll need to figure out what's wrong in the meantime.
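For reference, that pin can be applied with something like pip install langchain-google-genai==0.0.7 (0.0.7 is the release suggested above; the exact command depends on your environment).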
@slee-pasonatquila updating the underlying google-generativeai dependency to 0.4.0 seems to help, could you try that, please?
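To confirm what is actually installed before and after the change, a small standard-library check like the following prints the relevant versions (a sketch; the strings are the pip distribution names, assumed to match your environment):

import importlib.metadata as md

for dist in ("langchain-google-genai", "google-generativeai", "langgraph"):
    try:
        print(dist, md.version(dist))
    except md.PackageNotFoundError:
        print(dist, "not installed")

The upgrade itself would then be along the lines of pip install google-generativeai==0.4.0.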
@lkuligin Thanks. That works for me.
When I run the code below, I always get a server 500 error.
I think the code at https://github.com/langchain-ai/langchain-google/blob/6af9e3ba2bcc1f5002aa514d99529fa44876a523/libs/genai/langchain_google_genai/_function_utils.py#L31-L34 should be changed, e.g. to the glm.Tool construction shown above.