labdmitriy opened 1 month ago
Also, in the Studio section we have the following statement:
Our agent is defined in assistant/agent.py.
However, there is no such path in the repository. Does this mean that we need to use the `studio/agent.py` agent that we used in the Breakpoints and Streaming sections before?
I also noticed that in the video, after the first run, we have AI and Tool messages, but in the notebook (and also on the second screenshot above) we additionally have the Human message that we inserted as the first message in the stream.
It seems that after each new `graph.stream(None, ...)`, the first message of the current stream duplicates the last message of the previous stream.
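A minimal sketch of the pattern I mean (a placeholder graph, not the lesson's exact code):

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, MessagesState, START, END

# placeholder node standing in for the lesson's LLM assistant
def assistant(state: MessagesState):
    return {"messages": [("ai", "ok")]}

builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_edge(START, "assistant")
builder.add_edge("assistant", END)
graph = builder.compile(checkpointer=MemorySaver(), interrupt_before=["assistant"])

thread = {"configurable": {"thread_id": "1"}}

# first stream: runs up to the breakpoint
for event in graph.stream({"messages": [("user", "hi")]}, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()

# resume: the first event repeats the last message of the previous stream
for event in graph.stream(None, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()
```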
I also found that the math example in this notebook, "No, actually multiply 3 and 3!", is not stable across different model versions. To minimize cost I am using the gpt-4o-mini model, and it sometimes interprets "3!" as the factorial of 3, so the results in the video and the notebook also differ here.
The same issue occurred with gpt-4o-mini in the Chain lesson of Module 1; your last commit changed the code, but not the output:
I found older package versions where I don't have such issues with event duplicates:
System Information
------------------
> OS: Linux
> OS Version: #132~20.04.1-Ubuntu SMP Fri Aug 30 15:50:07 UTC 2024
> Python Version: 3.11.5 (main, Sep 11 2023, 13:32:41) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.2.41
> langchain: 0.2.16
> langchain_community: 0.2.16
> langsmith: 0.1.131
> langchain_anthropic: 0.1.23
> langchain_experimental: 0.0.65
> langchain_openai: 0.1.25
> langchain_text_splitters: 0.2.4
> langchainhub: 0.1.21
> langgraph: 0.2.5
> langserve: 0.2.3
Other Dependencies
------------------
> aiohttp: 3.10.5
> anthropic: 0.35.0
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: Installed. No version info available.
> httpx: 0.27.0
> jsonpatch: 1.33
> langgraph-checkpoint: 1.0.12
> numpy: 1.26.4
> openai: 1.51.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.8.2
> pyproject-toml: 0.0.10
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.32
> sse-starlette: Installed. No version info available.
> tenacity: 8.5.0
> tiktoken: 0.7.0
> types-requests: 2.32.0.20240712
> typing-extensions: 4.12.2
Is this an expected change in streaming behavior, or not?
Thank you.
Also, for the second case (Awaiting user input), I found that the diagrams in the video and in the notebook are different. Video:
Notebook (there is an extra conditional edge from `assistant` to `human_feedback`):
The second case is strange because we have the same code for the conditional edge with `tools_condition` from the `assistant` node, but somehow there are 3 conditional edges from `assistant`, not 2 edges as usual.
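For reference, here is roughly the builder code in question, with the assistant and tools stubbed out (a sketch, not the notebook's exact code):

```python
from langchain_core.messages import AIMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

def multiply(a: int, b: int) -> int:
    """Stub tool standing in for the lesson's tools."""
    return a * b

def assistant(state: MessagesState):
    # stub for the lesson's LLM-with-tools node
    return {"messages": [AIMessage(content="ok")]}

def human_feedback(state: MessagesState):
    # no-op node; execution is interrupted before it
    pass

builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_node("tools", ToolNode([multiply]))
builder.add_node("human_feedback", human_feedback)
builder.add_edge(START, "human_feedback")
builder.add_edge("human_feedback", "assistant")
# tools_condition routes to "tools" or END -- two targets, yet the rendered
# diagram shows a third conditional edge from assistant to human_feedback
builder.add_conditional_edges("assistant", tools_condition)
builder.add_edge("tools", "human_feedback")
graph = builder.compile(interrupt_before=["human_feedback"], checkpointer=MemorySaver())
```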
It seems that there have been many changes in langgraph's logic and structure over the last month, and there are multiple inconsistencies between the previous and current behavior (or there is documentation about these changes, but I didn't find information about any of them).
Could you please help with this?
Hi @rlancemartin,
What is interesting is that the duplicated events do not appear for `stream_mode="updates"`; I only get this issue for `stream_mode="values"`. Maybe this will help.
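For example (a sketch, assuming the `graph` and `thread` from my earlier sketch, with the thread paused at the breakpoint):

```python
# "values" re-emits the already-seen state as the first event on resume
for event in graph.stream(None, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()

# "updates" emits only each node's delta, keyed by node name,
# so nothing is duplicated on resume
for update in graph.stream(None, thread, stream_mode="updates"):
    for node, delta in update.items():
        print(node, delta)
```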
Thank you.
Hi @rlancemartin,
I am watching "Lesson 4: Research Assistant" from Module 4 and found that when you resume the graph execution there using `stream_mode="values"`, you already get the same behavior of the recent stream_mode implementation that I described above, which is different from the behavior in all of your previous videos in the course.
Thank you.
Hi @labdmitriy. I will confirm with @nfcampos. IIRC, we did make changes to streaming so that the current state is emitted when proceeding with `None` from a breakpoint, as you note here:
https://github.com/langchain-ai/langchain-academy/issues/39#issuecomment-2397272984
Hi @rlancemartin,
Thanks a lot for your response.
If emitting the current state is the new expected behavior for streaming mode "values", then not only is it inconsistent with the multiple videos (which is OK, because the code is changing very fast), but it is also inconsistent with other streaming modes, for example "updates", which has the old and, in my opinion, more expected behavior.
Therefore, for clarity, I am changing all occurrences of "values" streaming to "updates" streaming in your awesome LangChain Academy lessons, to be consistent both with your videos and with the old behavior that was consistent across all streaming modes.
One example that seems confusing to me: when you stream the graph to the end and then invoke the graph again to continue execution (`graph.stream(None, ...)`), with streaming mode "values" you will now always see the last event from the last step of the graph, whereas the old behavior returned nothing.
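To illustrate (same assumptions as in my sketch above, with the thread already streamed to the end):

```python
# "values": now emits a single event containing the final state
for event in graph.stream(None, thread, stream_mode="values"):
    event["messages"][-1].pretty_print()

# "updates": no nodes run on an already-finished thread, so the loop
# body never executes (this matches the old "values" behavior)
for update in graph.stream(None, thread, stream_mode="updates"):
    print(update)
```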
Thank you.
Thanks! If you put up a PR with this change, I can review. We can also add a note to the notebook to highlight that the change in streaming was made, and that the videos were filmed with an earlier version of the library that did not emit the current state. From discussion with @nfcampos, this is why we made the change:
It was a necessary change to make subgraphs work as intended. Otherwise, when a subgraph resumed, it could end up emitting nothing, and thus never applying its final result to the parent graph.
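Roughly the scenario, as a minimal sketch (placeholder graphs, not code from the course):

```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, MessagesState, START, END

# child graph with a breakpoint inside
def child(state: MessagesState):
    return {"messages": [("ai", "from subgraph")]}

sub = StateGraph(MessagesState)
sub.add_node("child", child)
sub.add_edge(START, "child")
sub.add_edge("child", END)
subgraph = sub.compile(interrupt_before=["child"])

# parent graph that uses the subgraph as a node
parent = StateGraph(MessagesState)
parent.add_node("sub", subgraph)
parent.add_edge(START, "sub")
parent.add_edge("sub", END)
graph = parent.compile(checkpointer=MemorySaver())

thread = {"configurable": {"thread_id": "1"}}
graph.invoke({"messages": [("user", "hi")]}, thread)  # pauses inside the subgraph
# on resume, the subgraph must emit its final state; otherwise the parent
# would never receive the subgraph's result
graph.invoke(None, thread)
```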
Thank you, but then I don't understand why `stream_mode="values"` was changed but not the "updates" mode.
Also, if working with subgraphs is a special case, which mode should be used in which cases, given that there are now 5 officially supported modes ("values", "updates", "custom", "messages" and "debug")?
Does each mode have its own unique impact on the behavior of the graph (as in the subgraphs case), and which modes' behavior was changed?
I can make the changes for the streaming mode and create a PR, but it would be great to understand the difference between the modes and, if the modes now behave differently, which mode is best for this case.
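If it helps the comparison, `stream_mode` also accepts a list of modes as far as I can tell, in which case each event comes as a `(mode, chunk)` tuple (same `graph`/`thread` assumptions as in my sketches above):

```python
# stream several modes at once; events arrive as (mode, chunk) tuples
for mode, chunk in graph.stream(None, thread, stream_mode=["values", "updates"]):
    print(mode, chunk)
```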
Could you also please help with the second case (Awaiting user input)? I have tried it again with the latest packages, and it still has the same issue (a 3rd conditional edge from `assistant` to `human_feedback`).
Hi @rlancemartin,
I am trying to reproduce Lesson 3 in Module 3 and found a strange difference in outputs: in the video, after the first `graph.stream(None, ...)` we have AI and Tool messages, and after the second one only an AI message, which seems consistent with the graph structure. The first case (in the video) seems to be the intuitive and correct way to resume, but I can't understand why I get a duplicated event with the Tool message for the second run.
Environment information:
Thank you.