Open neilzhangpro opened 4 days ago
Hey @neilzhangpro! I'm here to help you out with any bugs, questions, or contributions you have regarding LangChain.js. Let's dive into this issue together!
To set up a `createToolCallingAgent` with Gemini in LangChain.js and ensure it can use tools, you need to follow these steps:

Ensure the Prompt is Correct: The prompt must include an input variable named `agent_scratchpad`. This is crucial for the agent to function properly.
```ts
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);
```
Check the LLM for a `bindTools` Method: The language model (LLM) must support the `bindTools` method; if it doesn't, you will encounter an error, so ensure the LLM you are using is compatible. The agent performs this check internally:
```ts
if (_isBaseChatModel(llm)) {
  if (llm.bindTools === undefined) {
    throw new Error(
      `This agent requires that the "bind_tools()" method be implemented on the input model.`
    );
  }
  modelWithTools = llm.bindTools(tools);
} else {
  modelWithTools = llm;
}
```
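On the calling side, a quick sanity check before building the agent might look like the sketch below (`ChatGoogleGenerativeAI` from `@langchain/google-genai` does implement `bindTools`, so this check should pass for it):

```ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

// apiKey falls back to the GOOGLE_API_KEY environment variable if omitted.
const llm = new ChatGoogleGenerativeAI({ model: "gemini-1.5-flash", temperature: 0 });

// If bindTools is missing, this model cannot drive createToolCallingAgent.
if (typeof llm.bindTools !== "function") {
  throw new Error("The chosen chat model does not implement bindTools().");
}
```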
Define the Tools: Make sure you have defined the tools that the agent will have access to. For example, you can use `TavilySearchResults` for search capabilities.
```ts
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const tools = [new TavilySearchResults({ apiKey: "your_api_key_here" })];
```
Create the Agent: Use the `createToolCallingAgent` function to create the agent with the LLM, tools, and prompt.
```ts
import { createToolCallingAgent } from "langchain/agents";

const agent = await createToolCallingAgent({ llm, tools, prompt });
```
Execute the Agent: Use the `AgentExecutor` to invoke the agent with the desired input.
```ts
import { AgentExecutor } from "langchain/agents";

const agentExecutor = new AgentExecutor({ agent, tools });
const result = await agentExecutor.invoke({ input: "How is the weather in Beijing today?" });
```
Handle Streaming Events: If you want to handle streaming events, you can use the `streamEvents` method.
```ts
const eventStream = agentExecutor.streamEvents(
  { input: "How is the weather in Beijing today?" },
  { version: "v2" }
);
for await (const event of eventStream) {
  const eventType = event.event;
  if (eventType === "on_chat_model_stream") {
    // Handle the event data
  }
}
```
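For example, to print streamed model tokens as they arrive, a sketch of the handling loop could look like this (it assumes the chunk carries plain text in its `content` field; some providers emit an array of content parts instead):

```ts
for await (const event of eventStream) {
  if (event.event === "on_chat_model_stream") {
    const chunk = event.data?.chunk;
    // content is a string for plain text chunks; array-shaped content parts are skipped here.
    if (typeof chunk?.content === "string" && chunk.content.length > 0) {
      process.stdout.write(chunk.content);
    }
  }
}
```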
Additionally, ensure that the `TavilySearchResults` tool is correctly configured with an API key, either set as the environment variable `TAVILY_API_KEY` or passed directly as a parameter [1][2].
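A sketch of both options (the key values are placeholders):

```ts
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

// Option 1: rely on the TAVILY_API_KEY environment variable.
process.env.TAVILY_API_KEY = "tvly-...";
const searchViaEnv = new TavilySearchResults({ maxResults: 3 });

// Option 2: pass the key explicitly to the constructor.
const searchViaParam = new TavilySearchResults({ apiKey: "tvly-...", maxResults: 3 });
```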
I found similar discussions that might be relevant:

- Function calling not working in ChatGoogleGenerativeAI with gemini-1.5-pro or gemini-1.5-flash [3].
- Gemini is responding with empty text content as well as tool_call [4].
These discussions are unsolved, but they might provide additional insights into similar issues.
To continue talking to Dosu, mention @dosu.
By the way, I used the packages below: @langchain/google-genai 0.1.4. `this.googleLLMService.llm` is defined like this:
```ts
this.llm = new ChatGoogleGenerativeAI({
  model: 'gemini-1.5-flash',
  temperature: 0,
  apiKey: this.configService.get<string>('ALL_IN_ONE_KEY'),
  baseUrl: `${this.configService.get<string>('PROXY_URL')}/google`,
});
```
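For reference, a minimal end-to-end sketch wiring a Gemini model like this into the agent setup from the reply above could look as follows (Tavily is used as the example tool, the API key sources are placeholders, and the proxy `baseUrl` is left out):

```ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";

// Gemini chat model; GOOGLE_API_KEY here is a placeholder for your own key source.
const llm = new ChatGoogleGenerativeAI({
  model: "gemini-1.5-flash",
  temperature: 0,
  apiKey: process.env.GOOGLE_API_KEY,
});

// Tools the agent is allowed to call; TAVILY_API_KEY must be set in the environment.
const tools = [new TavilySearchResults({ maxResults: 3 })];

// The prompt must expose {input} and the agent_scratchpad placeholder.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = await createToolCallingAgent({ llm, tools, prompt });
const agentExecutor = new AgentExecutor({ agent, tools });

const result = await agentExecutor.invoke({
  input: "How is the weather in Beijing today?",
});
console.log(result.output);
```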
It's not working for me, robot! Can any real person help me?
Can anyone help me?
Checked other resources
Example Code
The following code:
Error Message and Stack Trace (if applicable)
The result is:
Description
I have read the doc: https://js.langchain.com/docs/integrations/chat/google_generativeai#tool-calling. There is no agent example there, so I don't know how to set up an agent with Gemini and make sure it can use tools.
System Info
langchain 0.3.6
gemini-1.5-flash