brainlid / langchain

Elixir implementation of a LangChain style framework.
https://hexdocs.pm/langchain/

Unable to stream response from OpenAI after executing tool calls #119

Closed jtormey closed 1 month ago

jtormey commented 1 month ago

Streaming works great normally. However, after executing a tool call requested by the LLM (OpenAI, in this case), I'm unable to figure out how to stream the final response.

I haven't found an explanation in the code or docs. Is this a limitation of OpenAI? If so, feel free to close this issue 🙂

Potentially related: https://github.com/brainlid/langchain/issues/10

brainlid commented 1 month ago

Hi @jtormey!

Ohh! I think I'm starting to understand the request in #10, and probably what you are asking for as well.

If you use `|> LLMChain.run(while_needs_response: true)`, it executes the function, returns the result to the LLM, and lets the LLM write and stream its response back.

Is this not what you want?

Or is there a situation where you otherwise DON'T want the LLM to be executed again? Or is there a reason you don't want to use `while_needs_response: true`?

You could do it manually after a tool result. I believe it would just be running the resulting updated chain one more time. That's not an awesome experience, so I'd rather make that easier if that's the case.
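For anyone landing here, a rough sketch of the pattern being described, based on the v0.1-era API as I understand it (the model name, function, and `callback_fn` handler below are placeholders, and the exact option names may differ in your version):

```elixir
# Hedged sketch: run a chain that can execute tool calls and then
# stream the LLM's follow-up answer, using while_needs_response: true.
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOpenAI
alias LangChain.Message

# `my_function` is a placeholder for a LangChain.Function you define.
{:ok, _updated_chain, _response} =
  %{llm: ChatOpenAI.new!(%{model: "gpt-4", stream: true})}
  |> LLMChain.new!()
  |> LLMChain.add_functions([my_function])
  |> LLMChain.add_message(Message.new_user!("What's the weather in Paris?"))
  |> LLMChain.run(
    # Re-runs the LLM after the tool result is added, so the final
    # answer is generated (and streamed) in the same call.
    while_needs_response: true,
    callback_fn: fn
      %LangChain.MessageDelta{} = delta ->
        # Streamed partial content arrives here.
        send(self(), {:chat_delta, delta})

      %LangChain.Message{} = message ->
        # A completed message for each round (tool request, then answer).
        send(self(), {:chat_message, message})
    end
  )
```

Note the callback fires across every round of the exchange, not just the first one.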

jtormey commented 1 month ago

Thank you for the response @brainlid, that's actually the exact behavior I want.

After looking at this again today, I realized my issue was a bug in my own code. The process that was receiving data from `:callback_fn` was exiting after the first `%LangChain.Message{}` it received, under the assumption that the stream was finished. However, that's not the case with `while_needs_response: true`.
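To illustrate the bug described above: with `while_needs_response: true` the callback delivers a completed `%LangChain.Message{}` for each round (the tool-call request, then the final assistant answer), so a receiving process must keep listening past the first one. A minimal sketch of the corrected receive loop, assuming the `{:chat_delta, ...}` / `{:chat_message, ...}` message shapes and the v0.1-era `function_name` field are as I've written them here:

```elixir
# Hedged sketch: keep receiving until a final assistant message
# (one with no pending tool call) arrives, rather than exiting
# on the first completed %LangChain.Message{}.
defp receive_stream do
  receive do
    {:chat_delta, _delta} ->
      # Partial streamed content: render it, then keep listening.
      receive_stream()

    {:chat_message, %LangChain.Message{role: :assistant, function_name: nil}} ->
      # Final assistant answer with no tool call: the stream is done.
      :done

    {:chat_message, %LangChain.Message{}} ->
      # Intermediate message (e.g. a tool-call request): keep listening.
      receive_stream()
  end
end
```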

Streaming with tool calls works flawlessly now that I've corrected the error. Closing this as it was my fault 🙂

brainlid commented 1 month ago

Glad you got it working!