Open labdmitriy opened 2 days ago
Also, in the Studio section we have the following statement:
Our agent is defined in assistant/agent.py.
However, no such path exists; does that mean we should use the studio/agent.py agent from the earlier Breakpoints and Streaming sections?
I also noticed that in the video, after the first run, we have AI and Tool messages, but in the notebook (and also on the second screenshot above) we additionally have the Human message that we inserted as the first message in the stream.
It seems that after each new graph.stream(None, ...) call, the first message of the current stream duplicates the last message of the previous stream.
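As a workaround sketch (not part of the course code, and not a langgraph API): if the resumed stream replays the checkpointed state first, the duplicated boundary event can be filtered by tracking message ids across runs. The message shape below is simplified to plain dicts for illustration.

```python
# Hypothetical helper: skip messages whose ids were already streamed.
# Assumes each streamed message carries a stable "id" field, as
# LangChain message objects do; the dict shape here is illustrative.

def dedupe_stream(events, seen_ids):
    """Yield only messages not seen in earlier stream runs."""
    for msg in events:
        if msg["id"] in seen_ids:
            continue  # replayed duplicate from the previous run
        seen_ids.add(msg["id"])
        yield msg

# Simulated runs: the second run re-emits the last message of the first.
first_run = [
    {"id": "human-1", "content": "Multiply 2 and 3"},
    {"id": "ai-1", "content": "tool call: multiply(2, 3)"},
]
second_run = [
    {"id": "ai-1", "content": "tool call: multiply(2, 3)"},  # duplicate
    {"id": "tool-1", "content": "6"},
]

seen = set()
merged = list(dedupe_stream(first_run, seen)) + list(dedupe_stream(second_run, seen))
# merged now contains human-1, ai-1, tool-1 with no duplicate ai-1
```

This only masks the symptom; the question of whether the replayed event is intended behavior remains.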
I also found that the math example in this notebook, like "No, actually multiply 3 and 3!", is not stable across model versions. To minimize cost I am using the gpt-4o-mini model, and it sometimes interprets 3! as the factorial of 3, so the results in the video and the notebook differ here as well.
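The two readings really do produce different numbers, which is why the outputs diverge. A quick check of both interpretations:

```python
import math

# Intended reading: "multiply 3 and 3" (the "!" is just punctuation) -> 9
intended = 3 * 3

# Factorial reading: "multiply 3 and 3!" where 3! = 6 -> 18
factorial_reading = 3 * math.factorial(3)

# The ambiguity could be avoided by rephrasing the prompt, e.g.
# "No, actually multiply 3 by 3." with no trailing exclamation mark.
```

So a model that parses "3!" as factorial returns 18 instead of 9, and the notebook output no longer matches the video.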
The same issue occurred with gpt-4o-mini in the Chain lesson of Module 1; your last commit changed the code, but not the output:
I found older package versions with which I don't see the event-duplication issue:
System Information
------------------
> OS: Linux
> OS Version: #132~20.04.1-Ubuntu SMP Fri Aug 30 15:50:07 UTC 2024
> Python Version: 3.11.5 (main, Sep 11 2023, 13:32:41) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.2.41
> langchain: 0.2.16
> langchain_community: 0.2.16
> langsmith: 0.1.131
> langchain_anthropic: 0.1.23
> langchain_experimental: 0.0.65
> langchain_openai: 0.1.25
> langchain_text_splitters: 0.2.4
> langchainhub: 0.1.21
> langgraph: 0.2.5
> langserve: 0.2.3
Other Dependencies
------------------
> aiohttp: 3.10.5
> anthropic: 0.35.0
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: Installed. No version info available.
> httpx: 0.27.0
> jsonpatch: 1.33
> langgraph-checkpoint: 1.0.12
> numpy: 1.26.4
> openai: 1.51.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.8.2
> pyproject-toml: 0.0.10
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.32
> sse-starlette: Installed. No version info available.
> tenacity: 8.5.0
> tiktoken: 0.7.0
> types-requests: 2.32.0.20240712
> typing-extensions: 4.12.2
Is this an expected change in streaming behavior, or not?
Thank you.
Also, for the second case (awaiting user input) I found that the diagrams in the video and the notebook differ. Video:
Notebook (with an extra conditional edge from assistant to human_feedback):
The second case is strange because we have the same code for the conditional edge with tools_condition from the assistant node, but somehow there are 3 conditional edges from assistant, not the usual 2.
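For context, a tools_condition-style router only ever picks between two destinations, which is why a third conditional edge from assistant in the rendered diagram looks wrong. Below is a simplified pure-Python sketch of that routing logic (not langgraph's actual implementation; the dict-based state is illustrative):

```python
# Sketch of a two-way router like langgraph's prebuilt tools_condition:
# route to the "tools" node if the last AI message requested a tool call,
# otherwise end the run. Only two outcomes are possible.

END = "__end__"  # stand-in for langgraph's END sentinel

def tools_condition_sketch(state):
    """Return the next node name based on the last message."""
    last_message = state["messages"][-1]
    if last_message.get("tool_calls"):
        return "tools"
    return END

with_tool_call = {"messages": [{"content": "", "tool_calls": [{"name": "multiply"}]}]}
no_tool_call = {"messages": [{"content": "The answer is 6.", "tool_calls": []}]}
```

If the diagram shows three outgoing conditional edges from assistant while the routing function can only return two values, the extra edge is presumably a rendering or graph-construction change rather than a real third branch.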
It seems that there have been many changes to langgraph's logic and structure over the last month, and there are multiple inconsistencies between the previous and current behavior (or there is documentation about these changes that I didn't find).
Could you please help with this?
Hi @rlancemartin,
I am trying to reproduce Lesson 3 in Module 3 and found a strange difference in outputs: after the first graph.stream(None, ...) we get AI and Tool messages, and after the second one only an AI message, which seems consistent with the graph structure. The first case (in the video) seems to be the intuitive and correct way to resume, but I can't understand why I get a duplicated event with the Tool message on the second run.
Environment information:
Thank you.