SebastianStehle closed this issue 2 months ago.
Hey @SebastianStehle, great to see you diving deep into LangChain JS again! How's everything going on your end?
I understand your concerns about the `on_chain_stream` and `on_llm_stream` events in the LangChain JS framework. Let's clarify these points:
Difference between `on_chain_stream` and `on_llm_stream` events: The `on_chain_stream` event is emitted during the execution of a chain, specifically when an intermediate chunk of data is processed within the chain. It signifies that part of the chain's logic has been executed and an intermediate result is available. The `on_llm_stream` event, on the other hand, relates to streaming data from a language model (LLM) specifically: it carries intermediate outputs generated directly by the language model before any additional processing or logic is applied by the chain. You can refer to this discussion for more details.
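As a rough sketch of how this difference shows up in practice (the prompt, model, and chain below are assumptions for illustration, not taken from this issue), you can iterate `streamEvents` and branch on the event name; a chat model emits `on_chat_model_stream`, while a non-chat LLM would emit `on_llm_stream` instead:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Hypothetical chain: prompt -> chat model -> string output parser.
// Assumes OPENAI_API_KEY is set in the environment.
const prompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}");
const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo" });
const chain = prompt.pipe(model).pipe(new StringOutputParser());

// streamEvents yields one event object per lifecycle step of every runnable in the chain.
for await (const event of chain.streamEvents({ topic: "parrots" }, { version: "v1" })) {
  if (event.event === "on_chat_model_stream") {
    // Chunk produced directly by the chat model (an AIMessageChunk);
    // a non-chat LLM would emit "on_llm_stream" here instead.
    console.log("model chunk:", event.data.chunk.content);
  } else if (event.event === "on_chain_stream") {
    // Intermediate chunk streamed by the surrounding chain; after the
    // string output parser this is a plain string.
    console.log("chain chunk:", event.data.chunk);
  }
}
```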
Final response availability: The final response is not available as either `on_chain_stream` or `on_llm_stream` events because these events are designed to emit intermediate data chunks during the execution process. The final response is typically available at the end of the chain's execution, marked by events such as `on_chat_model_end` and `on_chain_end`. These events signify the completion of the model's generation and the chain's execution, respectively, and carry the final output data.
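Continuing the hypothetical chain sketched above (again an assumption, not code from this issue; the top-level chain usually reports its name as `RunnableSequence` unless you set a custom `runName`), the final output can be read from the `*_end` events, while the `*_stream` events only carry intermediate pieces:

```typescript
// Collect intermediate chunks, then read the final result from the *_end events.
let streamed = "";

for await (const event of chain.streamEvents({ topic: "parrots" }, { version: "v1" })) {
  if (event.event === "on_chain_stream") {
    streamed += event.data.chunk; // intermediate pieces only
  } else if (event.event === "on_chat_model_end") {
    // The chat model has finished generating; event.data.output holds the completed generation.
    console.log("model finished:", event.data.output);
  } else if (event.event === "on_chain_end" && event.name === "RunnableSequence") {
    // The whole chain has finished; this event carries the chain's final output.
    console.log("final output:", event.data.output);
  }
}
```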
Content to expect from these events: Both events can include various types of data, such as `AIMessageChunk`, `ChatGenerationChunk`, and plain strings. The `AIMessageChunk` is specifically mentioned in the context of the `on_chain_stream` event, indicating that this event can carry chunks of AI-generated messages with attributes like `content`, `name`, `additional_kwargs`, and `response_metadata`. A `ChatGenerationChunk` would likely represent a piece of generated content from a chat-based interaction with a language model; it would include similar attributes to `AIMessageChunk` but be specifically tailored to chat interactions.
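As a hedged illustration of inspecting these payloads (the type checks below are assumptions about what a given chain may emit, not an exhaustive list, and they assume a recent `@langchain/core` where `AIMessageChunk` exposes `response_metadata`), you can branch on the runtime type of `event.data.chunk`:

```typescript
import { AIMessageChunk } from "@langchain/core/messages";

// Inspect whatever arrives in event.data.chunk: an AIMessageChunk from a
// chat model, or a plain string once an output parser has run.
for await (const event of chain.streamEvents({ topic: "parrots" }, { version: "v1" })) {
  if (event.event !== "on_chain_stream" && event.event !== "on_chat_model_stream") {
    continue;
  }
  const chunk = event.data.chunk;
  if (chunk instanceof AIMessageChunk) {
    console.log("message chunk:", chunk.content, chunk.additional_kwargs, chunk.response_metadata);
  } else if (typeof chunk === "string") {
    console.log("string chunk:", chunk);
  } else {
    console.log("other chunk type:", chunk);
  }
}
```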
Regarding your suggestion for improving the documentation, I agree that a detailed list or table of all events, including descriptions and possible data types, would be beneficial. However, as a technical support rep, I don't have the ability to make changes to the repository or its documentation. I recommend creating a new issue in the repository to suggest this improvement, providing as much detail as possible about your use case and the challenges you've encountered.
For more information about the `on_llm_stream` event, you can refer to this test case for a `FakeStreamingLLM` model. For the `on_chain_stream` event, you can refer to this example.
I hope this helps clarify your concerns. If you have any more questions, feel free to ask.
Checklist
Issue with current documentation:
I am testing different models, mainly Vertex AI, Mistral, and OpenAI, using streaming.
I have seen different events and would like to understand the differences between them. There are so many cases that I do not really understand:
Idea or request for content:
The documentation should contain a list or table of all events, including descriptions and possible data types. I don't understand this table: https://js.langchain.com/docs/expression_language/streaming#event-reference
Ideally: