Open DiogoPM9 opened 6 days ago
@DiogoPM9 The problem with the assistant not working with the tool response might be that it has no knowledge of the tools (no bind_tools on the assistant LLM).
Also, I feel like the graph here can be simplified by using the prebuilt react agent. For example, see here. https://langchain-ai.github.io/langgraph/how-tos/tool-calling/#react-agent
Here is another example that showcases how to use the interrupt feature in a graph to get human input:
https://langchain-ai.github.io/langgraph/how-tos/human_in_the_loop/wait-user-input/
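For reference, a minimal sketch of the prebuilt ReAct agent suggested above. The `get_weather` tool is a made-up stand-in; the commented-out lines require `langgraph`, `langchain-aws`, and AWS credentials, so only the tool stub runs here:

```python
# Illustrative stub tool for the prebuilt agent; the real graph's tools
# would go here instead.
def get_weather(city: str) -> str:
    """Return a canned weather report for `city` (illustrative only)."""
    return f"It is sunny in {city}."

# The agent wiring itself needs AWS credentials and the langgraph /
# langchain-aws packages, so it is shown but not executed:
#
# from langchain_aws import ChatBedrockConverse
# from langgraph.prebuilt import create_react_agent
#
# llm = ChatBedrockConverse(model="anthropic.claude-3-sonnet-20240229-v1:0")
# agent = create_react_agent(llm, tools=[get_weather])
# result = agent.invoke({"messages": [("user", "Weather in Lisbon?")]})
```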
@3coins Thank you for the reply!
My issue with your first point is that I would be binding tools to an LLM that will never use them. As per this LangGraph guide: Multi-agent supervisor, the supervisor node does not have access to any tools. Additionally, the code works with an OpenAI LLM, hence my concern that there is something specific about ChatBedrockConverse. Perhaps this is just how the Anthropic or Mistral models are configured?
Regarding your other two points, I absolutely agree; I have since implemented some changes that simplify the code. However, the pre-built ReAct agent is quite restrictive with custom workflows, so I avoided it.
EDIT:
I just tried binding the tools to the architect; as expected, it becomes confused and, instead of delegating the task to other nodes, it makes tool calls (which it is not meant to do).
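A supervisor that holds no tools can route on the LLM's plain-text reply instead of tool calls. This is a hypothetical sketch, not the thread's code; the worker names are invented:

```python
# Hypothetical routing helper for a tool-less supervisor node: the
# supervisor LLM is prompted to answer with a bare worker name, and this
# function maps that reply onto a graph edge. Worker names are made up.
WORKERS = {"researcher", "coder"}

def route(llm_reply: str) -> str:
    """Map the supervisor LLM's raw reply to a worker node, or FINISH."""
    choice = llm_reply.strip().lower()
    return choice if choice in WORKERS else "FINISH"
```

In a LangGraph setup, `route` would be used as the conditional-edge function from the supervisor node, so no tools ever need to be bound to the supervisor's LLM.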
We've found the same issue.
Bedrock (Converse in my case) requires that the tool definitions used to complete a chat are sent in every inference request. It is not satisfied with a conversation that merely references a tool result without those definitions.
This seems like a Bedrock bug honestly.
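To make the constraint concrete, here is a sketch of the Converse request shape this implies: the `toolConfig` block has to accompany every request in the conversation, including the turn that only carries the `toolResult`. Field names follow the Bedrock Converse API; the tool itself is a made-up example, and the actual `boto3` call is left commented out since it needs AWS credentials:

```python
# toolConfig must be re-sent on every Converse call, even the one whose
# only new content is the toolResult block.
tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "get_weather",  # invented example tool
            "description": "Look up the weather for a city.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            }},
        }
    }]
}

messages = [
    {"role": "user", "content": [{"text": "Weather in Lisbon?"}]},
    {"role": "assistant", "content": [{"toolUse": {
        "toolUseId": "tool-1", "name": "get_weather",
        "input": {"city": "Lisbon"}}}]},
    {"role": "user", "content": [{"toolResult": {
        "toolUseId": "tool-1",
        "content": [{"text": "Sunny"}]}}]},
]

# With boto3 this would be sent as (requires AWS credentials):
# client = boto3.client("bedrock-runtime")
# response = client.converse(modelId="...", messages=messages,
#                            toolConfig=tool_config)  # toolConfig every time
```

Dropping `toolConfig` from the request that carries the `toolResult` is, as I understand it, what triggers the ValidationException below.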
@DiogoPM9 @tysoekong Thanks for those inputs. I am going to work on a simplified sample to investigate this further. This could be a Bedrock service issue, but I think if we can come up with a simple reproducible example, it will be easier to find a workaround (short-term) or work with the Bedrock team to figure out a path forward (long term). I have added this issue to the next milestone, due next Thursday, hopefully will have something concrete in a few days.
Requirements:
The code of this issue is also present in #223. After the referenced issue was fixed and subsequently closed, I attempted to execute the code in the issue and came across another error:
The assistant node successfully routes the query to the node that is powered by an LLM with tools. The tool call is successfully executed and the output added to state. For some reason, the assistant cannot handle the response of the tool call.
My thought was that perhaps the issue was being caused by the tool call output being wrapped in a ToolMessage object while the assistant node does not have access to any tools. However, the following attempts to resolve the issue did not work:
ValidationException: An error occurred (ValidationException) when calling the Converse operation: The model that you are using requires the last turn in the conversation to be a user message. Add a user message to the conversation and try again.
ValidationException: An error occurred (ValidationException) when calling the Converse operation: messages.1.content: Conversation blocks and tool use blocks cannot be provided in the same turn.
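One possible workaround (an assumption on my part, not something verified in this thread): before the tool-less assistant node runs, fold any ToolMessage content into a plain user-role message, so ChatBedrockConverse never has to pair a `toolResult` block with tool definitions it was not given. Sketched with a stand-in message class rather than LangChain's:

```python
from dataclasses import dataclass

@dataclass
class Msg:
    """Stand-in for LangChain message classes (role + text content)."""
    role: str
    content: str

def fold_tool_results(history):
    """Replace tool messages with equivalent user-role text messages,
    so a tool-less LLM only ever sees plain conversation turns."""
    folded = []
    for m in history:
        if m.role == "tool":
            folded.append(Msg("user", f"Tool result: {m.content}"))
        else:
            folded.append(m)
    return folded
```

With real LangChain messages, the same idea would mean converting a `ToolMessage` into a `HumanMessage` carrying the tool output as text before invoking the assistant.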
Code:
Output:
Therefore, this issue was opened, since there does not seem to be a reasonable (documented) way to pass information through the graph.