langchain-ai / langchainjs

πŸ¦œπŸ”— Build context-aware reasoning applications πŸ¦œπŸ”—
https://js.langchain.com/docs/
MIT License

DOC: Please provide proper documentation for the streamEvents endpoint. #4987

Closed: SebastianStehle closed this issue 2 months ago

SebastianStehle commented 6 months ago


Issue with current documentation:

I am testing streaming with different models, mainly Vertex AI, Mistral, and OpenAI.

I have seen several different events and would like to understand how they differ. There are a number of cases I do not really understand (see the minimal sketch after this list for how I observe the events):

  1. What is the difference between on_chain_stream and on_llm_stream?
  2. Why is the final response not made available as an on_chain_stream or on_llm_stream event?
  3. What can we expect as the content of these events? I have seen:
    • AIMessageChunk
    • ChatGenerationChunk
    • strings
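For reference, here is a minimal sketch of how I observe these events (assuming @langchain/openai as the provider; the model name is only an example):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo" });

// Log each event name together with the runtime type of its chunk payload,
// to see where AIMessageChunk, ChatGenerationChunk, or plain strings appear.
for await (const event of model.streamEvents("Hello", { version: "v1" })) {
  const chunk = event.data?.chunk;
  console.log(event.event, chunk?.constructor?.name ?? typeof chunk);
}
```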

Idea or request for content:

The documentation should contain a list or table of all events, including a description and the possible data types for each. I don't understand the existing table: https://js.langchain.com/docs/expression_language/streaming#event-reference


dosubot[bot] commented 6 months ago

Hey @SebastianStehle, great to see you diving deep into LangChain JS again! How's everything going on your end?

I understand your concerns about the on_chain_stream and on_llm_stream events in the LangChain JS framework. Let's clarify these points:

  1. Difference between on_chain_stream and on_llm_stream events: The on_chain_stream event is emitted during the execution of a chain, specifically when an intermediate chunk of data is processed within the chain. It signifies that a part of the chain's logic has been executed and an intermediate result is available. The on_llm_stream event, on the other hand, relates to streaming data from a language model (LLM) specifically: it carries intermediate outputs generated directly by the model, before any additional processing or logic is applied by the chain. (For chat models, the corresponding model-level event is on_chat_model_stream.) You can refer to this discussion for more details.

  2. Final response availability: The final response is not made available as an on_chain_stream or on_llm_stream event because these events are designed to emit intermediate data chunks during the execution process. The final response is typically available at the end of the chain's execution, marked by events such as on_chat_model_end and on_chain_end. These events signify the completion of the model's generation and the chain's execution, respectively, and carry the final output data.

  3. Content to expect from these events: These events can carry various payload types, such as AIMessageChunk, ChatGenerationChunk, and plain strings. An AIMessageChunk is a chunk of an AI-generated message, with attributes like content, name, additional_kwargs, and response_metadata; this is what you would typically see from a chat model's stream events. A ChatGenerationChunk would likely represent a piece of generated content at the generation level; it wraps similar information to an AIMessageChunk but is tailored to chat generations. Plain strings usually come from runnables, such as output parsers, that stream string chunks. A minimal sketch after this list shows how to separate these cases in practice.
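As a concrete illustration, here is a minimal sketch that separates these events for a simple prompt-plus-chat-model chain (assuming @langchain/openai; note that for a chat model the model-level event is on_chat_model_stream, while on_llm_stream is its analogue for plain LLMs):

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromTemplate("Tell me a joke about {topic}");
const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo" });
const chain = prompt.pipe(model);

for await (const event of chain.streamEvents({ topic: "cats" }, { version: "v1" })) {
  if (event.event === "on_chat_model_stream") {
    // Token-level chunks straight from the model: data.chunk is an AIMessageChunk.
    console.log("model chunk:", event.data.chunk.content);
  } else if (event.event === "on_chain_stream") {
    // Intermediate chunks emitted by the enclosing chain; here they are the
    // same AIMessageChunks passed through, since the chain ends in the model.
    console.log("chain chunk:", event.data.chunk);
  } else if (event.event === "on_chain_end") {
    // The chain has finished; data.output carries the final accumulated output.
    console.log("final output:", event.data.output);
  }
}
```

Running this against a real provider shows the asymmetry directly: many on_chat_model_stream/on_chain_stream pairs arrive while tokens stream, followed by a single on_chat_model_end and on_chain_end carrying the complete output.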

Regarding your suggestion for improving the documentation, I agree that a detailed list or table of all events, including descriptions and possible data types, would be beneficial. However, as a technical support rep, I don't have the ability to make changes to the repository or its documentation. I recommend creating a new issue in the repository to suggest this improvement, providing as much detail as possible about your use case and the challenges you've encountered.

For more information about the on_llm_stream event, you can refer to this test case for a FakeStreamingLLM model. For the on_chain_stream event, you can refer to this example.

I hope this helps clarify your concerns. If you have any more questions, feel free to ask.


About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a πŸ‘ if this is helpful and πŸ‘Ž if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.