Closed jtormey closed 1 month ago
Hi @jtormey!
Ohh! I think I'm starting to understand the request in #10, and probably what you're asking for as well.
If you use `|> LLMChain.run(while_needs_response: true)`, it executes the function, returns the result to the LLM, and lets the LLM write and stream its response back.
Is this not what you want?
Or is there a situation where you otherwise DON'T want the LLM to be executed again? Or is there a reason you don't want to use `while_needs_response: true`?
You could do it manually after a tool result; I believe it would just be running the resulting updated chain one more time. That's not an awesome experience, though, so I'd rather make that easier if that's the case.
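To make the two approaches concrete, here is a minimal sketch contrasting them. It assumes a `chain` already set up with an LLM and tools as in the langchain Elixir library; the user message content is illustrative only.

```elixir
# Option 1 (sketch): let the library loop for you. With
# while_needs_response: true, the chain executes any requested tool,
# feeds the result back to the LLM, and keeps going until the LLM
# produces a final assistant response.
{:ok, updated_chain, _last_message} =
  chain
  |> LLMChain.add_message(Message.new_user!("What's the weather like?"))
  |> LLMChain.run(while_needs_response: true)

# Option 2 (sketch): do it manually. Run once, and if the LLM asked for
# a tool, run the resulting updated chain one more time so the LLM can
# respond to the tool result.
{:ok, chain_after_tool, _message} = LLMChain.run(chain)
{:ok, final_chain, _final_message} = LLMChain.run(chain_after_tool)
```

Option 1 is what the thread settles on: the final response after a tool call streams through the same callback as any other response.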
Thank you for the response @brainlid, that's actually the exact behavior I want.
After looking at this again today, I realized my issue was a bug in my own code. The process that was receiving data from `:callback_fn` was exiting after the first `%LangChain.Message{}` it received, under the assumption that the stream was finished. However, that's not the case with `while_needs_response: true`.
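The fix described above can be sketched as a receive loop that keeps listening instead of exiting after the first message. This is a hypothetical sketch, not the actual code from the thread: it assumes the `:callback_fn` forwards each `%LangChain.Message{}` to this process, and the `finished?/1` helper and 30-second timeout are illustrative.

```elixir
# Buggy version (sketch): assumes one message ends the stream.
# With while_needs_response: true, a tool-call message arrives first,
# then the streamed final response, so exiting here is too early.
def receive_one do
  receive do
    %LangChain.Message{} = msg -> msg
  end
end

# Corrected version (sketch): keep looping and accumulate messages
# until we decide the conversation is done (finished?/1 is a
# hypothetical predicate, e.g. checking for a final assistant message).
def receive_all(acc \\ []) do
  receive do
    %LangChain.Message{} = msg ->
      if finished?(msg) do
        Enum.reverse([msg | acc])
      else
        receive_all([msg | acc])
      end
  after
    # Give up if nothing arrives for a while (timeout is illustrative).
    30_000 -> Enum.reverse(acc)
  end
end
```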
Streaming with tool calls works flawlessly now that I've corrected the error. Closing this as it was my fault 🙂
Glad you got it working!
Streaming works great normally; however, after executing a tool call requested by the LLM (in this case OpenAI), I'm unable to figure out how to stream the final response.
I haven't found a reason why in the code or docs. Is this a limitation of OpenAI? If so, feel free to close this issue 🙂
Potentially related: https://github.com/brainlid/langchain/issues/10