This issue is for a: (mark with an `x`)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)
Minimal steps to reproduce

```python
import os
from operator import itemgetter
from typing import Dict, List, Union

from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain_core.messages import AIMessage
from langchain_core.runnables import (
    Runnable,
    RunnableLambda,
    RunnableMap,
    RunnablePassthrough,
)
from langchain_core.tools import tool
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings, ChatOpenAI


@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int


@tool
def add(first_int: int, second_int: int) -> int:
    """Add two integers."""
    return first_int + second_int


@tool
def exponentiate(base: int, exponent: int) -> int:
    """Exponentiate the base to the exponent power."""
    return base**exponent


llm = AzureChatOpenAI(config)  # config holds the Azure deployment/credentials

tools = [multiply, exponentiate, add]
llm_with_tools = llm.bind_tools(tools)
tool_map = {tool.name: tool for tool in tools}


def call_tools(msg: AIMessage) -> List[dict]:
    """Invoke each tool call requested by the model and attach its output."""
    tool_map = {tool.name: tool for tool in tools}
    tool_calls = msg.tool_calls.copy()
    for tool_call in tool_calls:
        tool_call["output"] = tool_map[tool_call["name"]].invoke(tool_call["args"])
    return tool_calls


chain = llm_with_tools | call_tools

input_text = (
    "What's 23 times 7, and what's five times 18 "
    "and add a million plus a billion and cube thirty-seven"
)
result = chain.invoke(input_text)
```
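To sanity-check the fan-out logic independently of Azure, the tool-dispatch loop can be exercised on a hand-built list of tool calls. This is a minimal sketch using plain functions and dicts standing in for the `@tool` objects and `AIMessage.tool_calls`, so it runs without LangChain or a model:

```python
# Plain-function stand-ins for the @tool-decorated tools above.
def multiply(first_int: int, second_int: int) -> int:
    return first_int * second_int

def add(first_int: int, second_int: int) -> int:
    return first_int + second_int

def exponentiate(base: int, exponent: int) -> int:
    return base ** exponent

tool_map = {"multiply": multiply, "add": add, "exponentiate": exponentiate}

# The shape a parallel tool-calling response takes: one entry per
# requested call, each with a tool name and its arguments.
tool_calls = [
    {"name": "multiply", "args": {"first_int": 23, "second_int": 7}},
    {"name": "multiply", "args": {"first_int": 5, "second_int": 18}},
    {"name": "add", "args": {"first_int": 1_000_000, "second_int": 1_000_000_000}},
    {"name": "exponentiate", "args": {"base": 37, "exponent": 3}},
]

# Same dispatch loop as call_tools, minus the .invoke() wrapper.
for tool_call in tool_calls:
    tool_call["output"] = tool_map[tool_call["name"]](**tool_call["args"])

print([c["output"] for c in tool_calls])  # [161, 90, 1001000000, 50653]
```

If the model truly returns parallel tool calls, `call_tools` produces one output per entry like this; the failure reported here is that only a single call ever arrives from `AzureChatOpenAI`.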
Any log messages given by the failure
Expected/desired behavior
Multiple tools called in a single model response (parallel tool calling)
OS and Version?
Linux
Versions
Mention any other details that might be useful
I was wondering whether Azure supports parallel tool calling through LangChain's AzureChatOpenAI. Even when I specify the model name, the chat_completions request always ends up using gpt-4-32k for some reason. When I use ChatOpenAI directly, parallel tool calling works.
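One thing worth noting: on Azure the model that serves a chat_completions request is picked by the *deployment*, not by the `model` parameter, so ending up on gpt-4-32k usually means the client is pointed at a gpt-4-32k deployment. A hedged sketch of pinning the deployment explicitly (endpoint, deployment name, and API version below are placeholders; parallel tool calling also requires a 1106-or-later model deployment and a sufficiently recent API version):

```python
from langchain_openai import AzureChatOpenAI

# All values are placeholders; substitute your own Azure resource details.
llm = AzureChatOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    azure_deployment="<your-deployment-name>",  # the deployment, not `model`, selects the weights
    openai_api_version="2024-02-01",
    temperature=0,
)
```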
Anyone having this issue?