Open Parker-Stafford opened 2 weeks ago
Hello - I'm working with @codefromthecrypt on using langchain telemetry. I filed this discussion in langchain about allowing context propagation within callback handlers, which I think is required for a proper approach to fix this; feel free to chime in if appropriate.
https://github.com/langchain-ai/langchain/discussions/27954
In the meantime, it would be possible to auto-instrument entry points such as `BaseLLM` to activate context via `run_id`, but ideally the library would provide a cleaner mechanism for it.
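To illustrate the `run_id`-based idea, here is a minimal stdlib-only sketch (all names are invented for illustration, not the OpenInference or OpenTelemetry API): a handler keys spans by `run_id` so that a nested callback can look up its parent via `parent_run_id`, which is the information langchain does pass to callbacks today.

```python
# Hypothetical sketch: reconstruct parent/child span relationships from the
# run_id / parent_run_id pair that langchain passes to callback handlers.
from dataclasses import dataclass
from typing import Optional
from uuid import UUID, uuid4


@dataclass
class Span:
    name: str
    parent: Optional["Span"] = None


class RunIdContextHandler:
    """Maps each callback invocation's run_id to a span so children can find parents."""

    def __init__(self) -> None:
        self._spans: dict[UUID, Span] = {}

    def on_start(self, name: str, run_id: UUID, parent_run_id: Optional[UUID] = None) -> None:
        # Look up the parent span by parent_run_id instead of relying on
        # ambient context, which the callback does not receive.
        parent = self._spans.get(parent_run_id) if parent_run_id else None
        self._spans[run_id] = Span(name, parent)

    def on_end(self, run_id: UUID) -> Span:
        return self._spans.pop(run_id)


# Usage: a chain run starts, then an LLM call nested under it.
handler = RunIdContextHandler()
chain_id, llm_id = uuid4(), uuid4()
handler.on_start("chain", chain_id)
handler.on_start("llm", llm_id, parent_run_id=chain_id)
span = handler.on_end(llm_id)
assert span.parent is not None and span.parent.name == "chain"
```

This keeps nesting correct within one instrumentor, but it does not put the span into the active OpenTelemetry context, which is what other auto-instrumentors would need to join the same trace.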
Hey @anuraaga, yeah, I started taking a look at this yesterday. It was trivial to get nesting correct in openai, so look for that soon re issue #1061. But langchain, as you mentioned, is not. We currently don't instrument any of its entry points or methods, because there are so many and they change all the time; instead we hook into the callback handlers, like you mentioned. This makes context propagation non-trivial. Thanks for starting the discussion with them; I'll follow along in the thread and see if we can get a fix in when they support it.
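The difference between the two approaches can be sketched with stdlib `contextvars` (hypothetical names, not the actual instrumentor code): wrapping an entry point means nested calls inherit the active span through the ordinary call stack, whereas callback hooks are invoked by the framework without that caller context.

```python
# Sketch: entry-point wrapping propagates context naturally via contextvars,
# which is the mechanism the OpenTelemetry Python context API uses internally.
import contextvars
from contextlib import contextmanager

current_span = contextvars.ContextVar("current_span", default=None)


@contextmanager
def start_span(name):
    # Activate a span for the duration of the wrapped call.
    token = current_span.set({"name": name, "parent": current_span.get()})
    try:
        yield current_span.get()
    finally:
        current_span.reset(token)


def instrumented_generate(prompt):
    # Entry-point wrapping: the nested span sees the enclosing span as parent
    # simply because it runs inside the caller's context.
    with start_span("llm.generate") as span:
        return span


with start_span("chain.run"):
    span = instrumented_generate("hi")

assert span["parent"]["name"] == "chain.run"
```

A callback handler, by contrast, fires from wherever the framework invokes it, so there is no enclosing `with` block to inherit from; that is the gap the linked langchain discussion is about.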
Note this is partially resolved in #1121; will keep this open for tracking any changes on the langchain front. For future context: using a proxy I was able to properly get context and wrap original function calls with `context.with`; however, since `generate` requests to LLMs are queued in langchain and not executed until later, the context was lost by the time openai (for example) was actually called.
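The failure mode described above can be reproduced in miniature with stdlib `contextvars` (a sketch, not the actual proxy code): the callback activates a context, but the real request is queued and runs later, after the activation has ended, so the queued work sees no span. Explicitly capturing the context at enqueue time would preserve it, but that capture has to happen inside langchain's queuing path, which a wrapper around the callbacks cannot reach.

```python
# Sketch of losing context across langchain's request queue (stdlib only).
import contextvars

current_span = contextvars.ContextVar("current_span", default=None)
queue = []


def on_llm_start():
    token = current_span.set("llm-span")
    # langchain queues the real provider request instead of calling it now.
    queue.append(lambda: current_span.get())
    current_span.reset(token)  # callback returns; the activation ends here


on_llm_start()
seen_by_provider = queue.pop()()   # executed later, e.g. by a batch runner
assert seen_by_provider is None    # the span context did not survive


def on_llm_start_captured():
    # Capturing a context snapshot at enqueue time keeps the span alive.
    token = current_span.set("llm-span")
    ctx = contextvars.copy_context()
    queue.append(lambda: ctx.run(current_span.get))
    current_span.reset(token)


on_llm_start_captured()
assert queue.pop()() == "llm-span"
```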
Describe the bug
When using multiple auto-instrumentors, spans from distinct auto-instrumentors show up in different traces.
To Reproduce
Additional context
See details below, specifically on the difficulties surrounding langchain; see the comments at https://github.com/Arize-ai/openinference/issues/1062#issuecomment-2456467995