lucasmaraal opened this issue 2 months ago
Ah, this is likely an issue w/ Gemini -
llm = ChatVertexAI(model="gemini-1.5-flash", temperature=0)
related to requirements on message order. I will need to look into it (I have not worked extensively w/ Gemini).
Thanks for the update. I am also trying to investigate the message order, but I have not found anything documented about it so far.
@lucasmaraal could you elaborate/clarify what you're trying to understand regarding message order? Do you mean in what order are the messages being put into the state? Maybe this can give you some more insight on how messages are being adjusted within the state, if that's what you are looking for?
@shiv248, I am trying to understand what I have to tweak in the graph from the lesson to have it working with Gemini. I don't know if this is out of scope, but I think my question may be related to the course.
Ya, I got Vertex credentials and confirmed that I can repro this - https://smith.langchain.com/public/be0143a5-e54e-4c10-a9b4-146956e9f17a/r
Here's the commit - https://github.com/langchain-ai/langchain-academy/blob/760e417766163ddd0c1c81e522b2ae3501827fdd/module-3/edit-state-human-feedback.ipynb
It's specific to adding the breakpoint -
interrupt_before=["assistant"]
For example, if you compile w/o the breakpoint it works as expected.
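For readers unfamiliar with breakpoints, here is a minimal pure-Python sketch of the idea behind interrupt_before (this is NOT LangGraph's implementation; the node names and runner are illustrative): the runner halts just before the named node and hands back the remaining work so execution can be resumed later.

```python
# Toy simulation of a graph runner with an interrupt_before breakpoint.
# Illustrative only -- real LangGraph persists state via checkpointers.

def run_graph(nodes, state, interrupt_before=None):
    """Run nodes in order; stop just before any node named in interrupt_before."""
    interrupt_before = interrupt_before or []
    for i, (name, fn) in enumerate(nodes):
        if name in interrupt_before:
            # Return the remaining nodes so execution can resume later.
            return state, nodes[i:]
        state = fn(state)
    return state, []

# Two toy nodes, named after the nodes discussed in this thread.
human_feedback = ("human_feedback", lambda s: s + ["feedback collected"])
assistant = ("assistant", lambda s: s + ["assistant replied"])

# Without a breakpoint the whole graph runs to completion.
state, _ = run_graph([human_feedback, assistant], [])
# With interrupt_before=["assistant"], execution pauses before that node.
paused, pending = run_graph([human_feedback, assistant], [],
                            interrupt_before=["assistant"])
```

The point relevant to this bug: resuming after such a pause replays the conversation history, which is where a model with strict turn-ordering rules can reject the request.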
Checking w/ some folks on our side in case it's an issue w/ the integration.
(Obviously, the course is tested w/ OpenAI and Anthropic, so use those to unblock. But it is good to confirm there's no issue w/ Gemini.)
Thanks for looking into it. I agree that it is good to confirm whether it works with Gemini too.
I noticed that in the notebook you posted the breakpoint is interrupt_before=["assistant"]
, but in the video lesson it is interrupt_before=["human_feedback"]
I asked folks from Google.
we recently released a course on LangGraph.
some folks have been eager to use Gemini!
some users have seen this error when using human-in-the-loop w/ gemini-1.5-flash --
https://github.com/langchain-ai/langchain-academy/issues/28#issuecomment-2369795773
i can repro it --
https://github.com/langchain-ai/langchain-academy/blob/760e417766163ddd0c1c81e522b2ae3501827fdd/module-3/edit-state-human-feedback.ipynb
here is the trace --
https://smith.langchain.com/public/be0143a5-e54e-4c10-a9b4-146956e9f17a/r
we have HumanMessage -> AIMessage -> ToolMessage -> failure:
google.api_core.exceptions.InvalidArgument: 400 Please ensure that function call turn comes immediately after a user turn or after a function response turn.
was curious if anyone who has deeper context on Gemini may be able to debug what is going on.
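To make the constraint in that 400 error concrete, here is a small sketch of a checker that enforces the rule as literally stated: a function-call turn must come immediately after a user turn or a function-response turn. The role names are simplified stand-ins for the LangChain message types, and the "resume" scenario shown is a hypothesis about how the breakpoint could produce an invalid ordering, not a confirmed diagnosis.

```python
# Sketch: validate a message sequence against the ordering rule quoted in
# the error. Roles are simplified stand-ins for LangChain message types:
#   "user"         ~ HumanMessage
#   "ai_tool_call" ~ AIMessage carrying a function/tool call
#   "tool"         ~ ToolMessage (a function response)

def violates_gemini_order(roles):
    """Return True if any function-call turn does not immediately follow
    a user turn or a function-response turn."""
    for i, role in enumerate(roles):
        if role == "ai_tool_call":
            if i == 0 or roles[i - 1] not in ("user", "tool"):
                return True
    return False

# HumanMessage -> AIMessage(tool call) -> ToolMessage: a valid sequence.
ok = violates_gemini_order(["user", "ai_tool_call", "tool"])
# Hypothetical resume-after-breakpoint history where the tool call lands
# right after a plain AI text turn -- the shape Gemini rejects with a 400.
bad = violates_gemini_order(["user", "ai", "ai_tool_call", "tool"])
```

ok evaluates to False (no violation) and bad to True, matching the rule in the error text.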
> Thanks for looking into it. I noticed that in the notebook you posted the breakpoint is interrupt_before=["assistant"], but in the video lesson it is interrupt_before=["human_feedback"]

We do this in the bottom section of the notebook, "Awaiting user input".
I think I am getting a similar error using gpt-4o-mini in Module 5: Lesson 5 #59
Steps to reproduce:
Running the following snippets will reproduce the issue:
Full trace:
Expected Behavior:
The graph executes and returns an AIMessage, like in the tutorial.
**Environment**:
python: 3.12.3
langgraph: 0.2.22
langgraph-checkpoint-sqlite: 1.0.3
langchain-google-vertexai: 2.0.0
vertexai: 1.67.0
langchain-openai: 0.2.0