langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
93.44k stars 15.04k forks

OpenAIAssistantRunnable input validation error #14050

Closed — cnevelle-blueprint closed this issue 6 months ago

cnevelle-blueprint commented 10 months ago

System Info

Python 3.11 running locally in PyCharm.

Who can help?

@hwchase17 @agola11


Reproduction

# Imports (module paths as of the langchain version current when this issue was filed)
from langchain.agents import AgentExecutor
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.agents.openai_assistant import OpenAIAssistantRunnable
from langchain.tools.playwright.utils import create_sync_playwright_browser
from langchain.tools.tavily_search import TavilySearchResults
from langchain.utilities.tavily_search import TavilySearchAPIWrapper

# create_pinecone_tool and sf_tools are project-specific helpers defined elsewhere

def research_tool(input, thread_id=None):

    # Initialize Tavily Search
    search = TavilySearchAPIWrapper()
    tavily_tool = TavilySearchResults(api_wrapper=search)
    salesforce_history_tool = create_pinecone_tool()

    # Initialize PlayWright Web Browser
    sync_browser = create_sync_playwright_browser()
    toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=sync_browser)

    # Initialize the Toolkit
    tools = toolkit.get_tools()
    tools.append(tavily_tool) # Add Tavily Search to the Toolkit
    tools.append(salesforce_history_tool) # Add Salesforce History to the Toolkit
    tools.extend(sf_tools) # Add Salesforce Tools to the Toolkit

    agent = OpenAIAssistantRunnable.create_assistant(
        name="Research Assistant",
        instructions="You are a personal research assistant on company information",
        tools=tools,
        model="gpt-4-1106-preview",
        as_agent=True,
    )

    agent_executor = AgentExecutor(agent=agent, tools=tools)

    if thread_id: 
        result = agent_executor.invoke({"content": input, "thread_id": thread_id})
    else:
        result = agent_executor.invoke({"content": input})

    output = result['output']
    thread_id = result['thread_id']

    return output, thread_id

Expected behavior

I am looking to have my agent run using the Assistants API. Instead, I receive the following error:

[chain/start] [1:chain:AgentExecutor > 4:chain:OpenAIAssistantRunnable] Entering Chain run with input:
[inputs]
[chain/error] [1:chain:AgentExecutor > 4:chain:OpenAIAssistantRunnable] [315ms] Chain run errored with error:
"BadRequestError(\"Error code: 400 - {'error': {'message': '1 validation error for Request\\\\nbody -> tool_outputs -> 0 -> output\\\\n  str type expected (type=type_error.str)', 'type': 'invalid_request_error', 'param': None, 'code': None}}\")"
dosubot[bot] commented 10 months ago

🤖

Hello @cnevelle-blueprint,

I'm Dosu, a bot designed to assist with debugging, answering questions, and helping users like you become contributors. While we're waiting for a human maintainer to assist you, I'm here to help as best I can. Looking forward to working together to resolve your issue!

Based on the error message you provided, it seems like the issue is related to the tool_outputs field in the request body. The error message indicates that a string type was expected for the output field of the first tool output, but a different type was provided.

In the invoke method of the OpenAIAssistantRunnable class, the tool_outputs field is expected to be a list of dictionaries, where each dictionary represents the output of a tool. Each dictionary should have an output field that is a string.

Here is an example of how the tool_outputs field should be structured:

tool_outputs = [
    {"output": "tool1 output"},
    {"output": "tool2 output"},
    ...
]
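For context, the underlying Assistants API endpoint (submit_tool_outputs) pairs each string output with the tool_call_id of the call it answers; LangChain fills the id in from the agent's tool-call steps. An illustrative entry (the id and payload values here are made up):

```python
# Shape of one submit_tool_outputs entry (illustrative values only)
tool_outputs = [
    {
        "tool_call_id": "call_abc123",  # hypothetical id echoed back from the run
        "output": '{"results": []}',    # must be a str, even for structured data
    },
]
```

The 400 error above is the API rejecting an entry whose output field was not a string.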

In your research_tool function, you are adding multiple tools to the tools list and then passing this list to the AgentExecutor and OpenAIAssistantRunnable. However, it's not clear from the provided code how the outputs of these tools are structured and whether they meet the expected format.

To resolve the issue, ensure that every tool in the tools list returns a string. If a tool returns structured data (a dict, a list of search results, etc.), coerce that result to a string, for example with json.dumps, before it is handed back to the assistant.
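One way to enforce this is a small wrapper that coerces a tool function's return value to a string before it reaches the assistant. A minimal sketch, assuming plain callables; ensure_str and stringify are hypothetical helper names, not LangChain APIs:

```python
import json


def ensure_str(value):
    # Coerce a tool result to the string the Assistants API expects,
    # serializing structured data as JSON where possible.
    if isinstance(value, str):
        return value
    try:
        return json.dumps(value)
    except (TypeError, ValueError):
        return str(value)


def stringify(func):
    # Wrap a tool function so its return value is always a string.
    def wrapper(*args, **kwargs):
        return ensure_str(func(*args, **kwargs))
    return wrapper
```

You could then register stringify(my_search_func) as the func of a Tool before appending it to the tools list, so every output that reaches tool_outputs is already a string.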

If you are still facing the issue, please provide more details about the tools you are using and how their outputs are structured. This will help in identifying the exact cause of the issue and providing a more accurate solution.

For more information about the invoke method and the expected format of the tool_outputs field, you can refer to the OpenAIAssistantRunnable class in the LangChain codebase.

OlajideOgun commented 9 months ago

facing the same issue