microsoft / promptflow

Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
https://microsoft.github.io/promptflow/
MIT License
8.32k stars · 712 forks

[BUG] when running Langchain AgentExecutor, TypeError occurs - 'generator' object does not support the context manager protocol #3079

Closed. hyeonje-cho closed this issue 2 days ago

hyeonje-cho commented 2 weeks ago

Describe the bug
When running the langchain AgentExecutor inside a promptflow Python tool, a TypeError occurs: 'generator' object does not support the context manager protocol. The same function runs fine in environments without the Python tool decorator.

How To Reproduce the bug
Here is my code:

from promptflow.core import tool
from dotenv import load_dotenv, find_dotenv
from langchain_openai import AzureChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_community.tools import DuckDuckGoSearchRun
from promptflow.connections import AzureOpenAIConnection
from langchain import hub

@tool
def my_python_tool(question: str, openai_connect: AzureOpenAIConnection) -> str:
    load_dotenv(find_dotenv(), override=True)
    llm = AzureChatOpenAI(
        azure_deployment="gpt-35-turbo-16k",  # gpt-35-turbo-16k or gpt-4-32k
        openai_api_key=openai_connect.api_key,
        azure_endpoint=openai_connect.api_base,
        openai_api_type=openai_connect.api_type,
        openai_api_version=openai_connect.api_version,
    )
    search = DuckDuckGoSearchRun()
    tools = [search]
    prompt = hub.pull("hwchase17/openai-tools-agent")
    agent = create_tool_calling_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=False)
    result = agent_executor.invoke({"input": question})

    return result["output"]

Expected behavior Return agent_executor's output

Screenshots

[screenshot: traceback ending in "TypeError: 'generator' object does not support the context manager protocol"]


guming-learning commented 1 week ago

Root cause: langchain-openai wrapped its streaming code in a context manager block in this PR, which was released in langchain-openai 0.1.2 on Apr 10, 2024. The change looks like this:

with self.client.create(messages=message_dicts, **params) as response:
    ...

However, promptflow wraps the generator output of the OpenAI API for tracing, and the wrapped generator does not implement the __enter__ and __exit__ methods, which causes the error.

We will change the wrapper to align with the original context manager behavior.
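One way such a fix can work, sketched here with hypothetical class names (this is not promptflow's actual implementation): the tracing wrapper can proxy the context-manager protocol through to the wrapped object, so that `with wrapper as response:` behaves like `with original as response:` while iteration still passes through the tracing hook.

```python
class TracedStream:
    """Hypothetical tracing wrapper that forwards both iteration and the
    context-manager protocol to the wrapped stream object."""

    def __init__(self, inner):
        self._inner = inner

    def __iter__(self):
        for item in self._inner:
            # a real implementation would record a trace event per item here
            yield item

    def __enter__(self):
        # delegate to the wrapped object's __enter__ if it has one
        if hasattr(self._inner, "__enter__"):
            self._inner = self._inner.__enter__()
        return self

    def __exit__(self, exc_type, exc, tb):
        # delegate cleanup to the wrapped object's __exit__ if it has one
        if hasattr(self._inner, "__exit__"):
            return self._inner.__exit__(exc_type, exc, tb)
        return False


class DummyStream:
    """Stand-in for an httpx/OpenAI streaming response that is a context manager."""

    def __init__(self, items):
        self._items = items
        self.closed = False

    def __iter__(self):
        return iter(self._items)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.closed = True
        return False


stream = DummyStream(["a", "b"])
with TracedStream(stream) as traced:
    chunks = list(traced)
print(chunks, stream.closed)
```

With this shape, the `with self.client.create(...) as response:` pattern introduced in langchain-openai 0.1.2 works whether or not the response has been wrapped for tracing.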