bsorrentino / langgraph4j

🚀 LangGraph for Java. A library for building stateful, multi-actor applications with LLMs, built to work jointly with langchain4j
https://bsorrentino.github.io/langgraph4j/
MIT License

FEAT: Use streaming response to output LLM results #31

Open · bsorrentino opened 1 month ago

bsorrentino commented 1 month ago

Related to this question in #30

Enable the use of langchain4j's StreamingChatLanguageModel instead of ChatLanguageModel
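
For context, a minimal sketch of the difference between the two APIs, assuming langchain4j 0.x: ChatLanguageModel returns the whole answer in one blocking call, while StreamingChatLanguageModel pushes partial tokens to a StreamingResponseHandler callback as they are generated. The OpenAI model class and factory method below are illustrative only.

```java
import java.util.concurrent.CountDownLatch;

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiStreamingChatModel;
import dev.langchain4j.model.output.Response;

public class StreamingExample {

    public static void main(String[] args) throws InterruptedException {

        // Streaming variant: tokens are pushed to a handler as they are generated,
        // instead of the whole answer being returned in one blocking call.
        StreamingChatLanguageModel model =
                OpenAiStreamingChatModel.withApiKey(System.getenv("OPENAI_API_KEY"));

        CountDownLatch done = new CountDownLatch(1);

        model.generate("Tell me a joke", new StreamingResponseHandler<AiMessage>() {

            @Override
            public void onNext(String token) {
                System.out.print(token); // each partial token, as soon as it arrives
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                System.out.println("\n[done]");
                done.countDown();
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
                done.countDown();
            }
        });

        done.await(); // keep the JVM alive until the stream finishes
    }
}
```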

TyCoding commented 2 weeks ago

Hi, is there any progress on this issue? Or, if a non-serializable object such as a WebSocket or an SseEmitter needs to be injected into a node in a workflow, so that content can be pushed to the outside in real time during the node's execution, is there any other solution?

bsorrentino commented 1 week ago

> Hi, is there any progress on this issue? Or, if a non-serializable object such as a WebSocket or an SseEmitter needs to be injected into a node in a workflow, so that content can be pushed to the outside in real time during the node's execution, is there any other solution?

Hi @TyCoding Currently, to implement such a feature, we have to consider that:

  1. NodeAction should be able to return an AsyncIterator.
  2. NodeAction should accept an additional argument (i.e. a RunnableConfig) to decide whether or not to use streaming for the result.
  3. The AsyncStream must be connected to the LLM's streaming result, which in the case of langchain4j is a StreamingResponseHandler (see the sketch below).
  4. The node's streaming result must be propagated through the graph's streaming execution.

It is not that simple and could potentially break backward compatibility, so I'm evaluating the best strategy before starting the implementation.
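
To make point 3 concrete, here is a hypothetical sketch, not the library's current API: TokenStreamBridge and its sentinel handling are purely illustrative. It adapts the push-based StreamingResponseHandler to a pull-based token source that a graph node, or an SseEmitter as in @TyCoding's scenario, could drain.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.output.Response;

// Hypothetical adapter: turns langchain4j's push-based StreamingResponseHandler
// into a pull-based token source that a graph node (or an SSE/WebSocket writer)
// can consume while the LLM is still generating.
public class TokenStreamBridge implements StreamingResponseHandler<AiMessage> {

    private static final String END_OF_STREAM = "\u0000"; // sentinel, never a real token
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    @Override
    public void onNext(String token) {
        queue.offer(token); // push each partial token as it arrives
    }

    @Override
    public void onComplete(Response<AiMessage> response) {
        queue.offer(END_OF_STREAM);
    }

    @Override
    public void onError(Throwable error) {
        queue.offer(END_OF_STREAM); // real code should also surface the error
    }

    /** Blocks until the next token is available; returns null once the stream ends. */
    public String nextToken() throws InterruptedException {
        String token = queue.take();
        return END_OF_STREAM.equals(token) ? null : token;
    }
}
```

A node could then drain the bridge and forward each chunk in real time; in this usage snippet, streamingModel and emitter are assumed to exist in the surrounding code:

```java
TokenStreamBridge bridge = new TokenStreamBridge();
streamingModel.generate("Summarize the current state", bridge);

String token;
while ((token = bridge.nextToken()) != null) {
    emitter.send(token); // e.g. a Spring SseEmitter pushing to the client
}
```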

TyCoding commented 1 week ago

@bsorrentino Thanks, looking forward to this feature

sharfy commented 4 days ago

up