Checked other resources
[X] I added a very descriptive title to this issue.
[X] I searched the LangChain documentation with the integrated search.
[X] I used the GitHub search to find a similar question and didn't find it.
[X] I am sure that this is a bug in LangChain rather than my code.
[X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
Example Code
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

def fake_chain(inputs: dict) -> dict:
    return {**inputs, "key": "extra"}

def on_start(run: Run):
    print("on_start:", run.inputs)

def on_end(run: Run):
    print("on_end: ", run.outputs)

chain = RunnableLambda(fake_chain).with_listeners(on_end=on_end, on_start=on_start)
chain = chain.map()

data = [{"name": "one"}, {"name": "two"}]
out = chain.invoke(data, config={"max_concurrency": 1})
print("result: ", out)
max_concurrency is set to 1 only for simplicity.
Error Message and Stack Trace (if applicable)
No response
Description
I want to store the fake_chain output using listeners. with_listeners() allows hooking only the top-level runnable (according to its docstring), but the Run object passed to the listeners is incorrect when map() is used.
I expect on_start and on_end to print the inputs and outputs of each element's run, but the Run object the listeners receive is incorrect. I didn't dive deeper, but something seems to go wrong in RunnableBindingBase.batch() -> _merge_configs() (a guess).
System Info
platform: linux
python: 3.11.8