open-telemetry / opentelemetry-python-contrib

OpenTelemetry instrumentation for Python modules
https://opentelemetry.io
Apache License 2.0

TypeError: unsupported operand type(s) for +=: 'async_generator_asend' and 'async_generator_asend' #2579

Closed audols closed 2 months ago

audols commented 3 months ago

Describe your environment

Have tried both Python 3.11 and 3.12. Using opentelemetry-instrumentation==0.46b0 and both langchain==0.2.1 and langchain==0.2.2.

Steps to reproduce

Implement and make a call to any LangChain AgentExecutor, e.g. in the following LangServe context:

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langserve import add_routes

# model, tools, agent_prompt, app, Input, and Output are
# application-specific and defined elsewhere.
agent = create_tool_calling_agent(model, tools, agent_prompt)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
)

add_routes(
    app,
    agent_executor.with_types(input_type=Input, output_type=Output),
    path="/chat",
)

What is the expected behavior? No error - the AgentExecutor chain should complete without issue, and traces should be available at the destination specified in the instrumentation configuration.

What is the actual behavior? What did you see instead?

chat-app-langchain  |   File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 560, in aplan
chat-app-langchain  |     final_output += chunk
chat-app-langchain  | TypeError: unsupported operand type(s) for +=: 'async_generator_asend' and 'async_generator_asend'

The failing line is in the following LangChain agent code: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/agent.py#L560C18-L561C14
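For illustration, the failure mode can be reproduced in isolation. The sketch below is hypothetical (broken_wrapper is not the actual instrumentation code): it shows how a wrapper that forwards asend coroutine objects without awaiting them makes the consumer's accumulation loop raise the same TypeError.

```python
import asyncio

async def stream():
    # Stand-in for the agent's streamed output (hypothetical chunks).
    yield "Thought: "
    yield "done"

async def broken_wrapper(gen):
    # Hypothetical faulty wrapper: it yields the asend coroutine object
    # itself instead of awaiting it, so the consumer receives
    # async_generator_asend objects rather than the streamed chunks.
    for _ in range(2):
        yield gen.asend(None)  # BUG: missing `await`

async def consume():
    # Mirrors the accumulation loop in LangChain's aplan().
    final_output = None
    try:
        async for chunk in broken_wrapper(stream()):
            if final_output is None:
                final_output = chunk
            else:
                final_output += chunk  # raises here, as in agent.py
    except TypeError as exc:
        return str(exc)
    return "no error"

print(asyncio.run(consume()))
```

The printed message matches the traceback above: both operands of += are async_generator_asend objects, because neither "chunk" was ever awaited.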

Additional context The exception is thrown across multiple agent types.

audols commented 3 months ago

FYI, this is tied to existing issues with streaming: disabling streaming for the agent makes the error go away, but ideally we'd be able to keep streaming enabled as well.

xrmx commented 3 months ago

@audols Are you sure this is related to opentelemetry at all? We don't have any langchain instrumentation and you haven't installed any instrumentation as far as I can see.

audols commented 3 months ago

@xrmx Yes, this was using OpenTelemetry auto-instrumentation - disabling auto-instrumentation removed the error. We don't necessarily need traces for this part of the code; it would just be nice to have auto-instrumentation enabled without the application throwing an error.
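As a stopgap, auto-instrumentation supports excluding individual instrumentations via the OTEL_PYTHON_DISABLED_INSTRUMENTATIONS environment variable. A sketch (the instrumentation names and app.py entry point below are illustrative; the actual offending instrumentation hadn't been identified at this point):

```shell
# Disable specific instrumentations while keeping the rest of
# auto-instrumentation active. Names here are examples only.
export OTEL_PYTHON_DISABLED_INSTRUMENTATIONS="aiohttp-client,urllib3"
opentelemetry-instrument python app.py
```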

On a side note, we are using Traceloop for Langchain-side instrumentation.

xrmx commented 3 months ago

@audols What I am saying is that opentelemetry-instrument itself doesn't patch your libraries; the individual instrumentation packages do. So I would take a look at what the langchain instrumentation is doing instead.

audols commented 3 months ago

@xrmx Interesting - I removed the opentelemetry-instrumentation-langchain library from my build and still received the same error, so I assumed it had to be the core OTel instrumentor interacting with LangChain. I can go ahead and open the issue on that side instead, though - thanks for the guidance.

xrmx commented 2 months ago

@audols do you have any update on this?

audols commented 2 months ago

@xrmx I went ahead and opened a follow-up issue on the Traceloop repo: https://github.com/traceloop/openllmetry/issues/1544. I'll close this issue and follow up here with any updates.