Closed: kay-es closed this 1 month ago
This one is a bit tricky. LangChain final-answer streaming is very brittle, since LangChain (in my understanding) was not built around that concept to begin with.
What you could do is override `cl.AsyncLangchainCallbackHandler` and add custom logic that triggers the stream-final-answer flag when the agent reaches the tool you know has `return_direct=True`.
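Untested sketch of that idea (`my_direct_tool` is a placeholder, and `answer_reached` is an assumption about the handler's internals, so check the callback handler source of the Chainlit version you are running):

```python
import chainlit as cl


class DirectToolCallbackHandler(cl.AsyncLangchainCallbackHandler):
    # Placeholder: name of the tool you declared with return_direct=True.
    DIRECT_TOOL_NAME = "my_direct_tool"

    async def on_tool_start(self, serialized, input_str, **kwargs):
        await super().on_tool_start(serialized, input_str, **kwargs)
        # Once the agent dispatches to the return_direct tool, flip the
        # handler into final-answer mode so the tokens produced by the
        # nested chain are streamed to the UI. `answer_reached` is an
        # assumption about the internal flag name.
        if serialized.get("name") == self.DIRECT_TOOL_NAME:
            self.answer_reached = True
```

Then pass `DirectToolCallbackHandler(stream_final_answer=True)` in the `callbacks` list instead of the stock handler (assuming your Chainlit version exposes that constructor flag).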
Hey folks,
I'm currently implementing an agent with LangChain and Chainlit; its tools are set to `return_direct=True`. The streaming feature is activated and seems to work.
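For context, the tools are declared roughly like this (simplified; the names and the nested chain are placeholders):

```python
from langchain.agents import Tool
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

# Placeholder for the nested LLM chain whose output should end up in the UI.
docs_chain = LLMChain(
    llm=ChatOpenAI(streaming=True),
    prompt=PromptTemplate.from_template("Answer using the docs: {question}"),
)

tools = [
    Tool(
        name="search_docs",              # placeholder tool name
        func=docs_chain.run,
        description="Answer questions from the documentation.",
        return_direct=True,              # tool output goes straight back to the user
    )
]
```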
My problem is that I don't want the agent to stream its decision about which tool to choose; I want to stream the output that the tool itself generates inside its nested LLM chains.
Is it possible to stream the output of the tools directly instead of waiting for it to be passed back to the agent? The `return_direct` attribute is set to true anyway.
My current implementation of `on_message` looks like this:
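(Simplified sketch: the agent setup and the session key are placeholders, and the `stream_final_answer` kwarg is the flag discussed above; exact options depend on the Chainlit version.)

```python
import chainlit as cl


@cl.on_message
async def on_message(message: cl.Message):
    # The AgentExecutor with the return_direct tools is created in
    # @cl.on_chat_start and stored in the user session (placeholder key).
    agent = cl.user_session.get("agent")

    res = await agent.acall(
        message.content,
        callbacks=[cl.AsyncLangchainCallbackHandler(stream_final_answer=True)],
    )
    await cl.Message(content=res["output"]).send()
```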
Calling the chain directly gives me the results the way I need them for the tool/chain, but not for the agent, as shown in the screenshots. Any thoughts or ideas? 🙂