Closed · thiagotps closed this 1 week ago
Hi - this latest PR should allow you to make use of `stop_reason` and `tool_calls` in the AIMessage's response metadata.
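For readers following along, the metadata discussed here would look roughly like this. The key names are taken from this thread; the values are made-up examples, not real Bedrock output:

```python
# Illustrative only: the response_metadata keys discussed in this thread.
# Values are invented examples, not actual Bedrock API output.
response_metadata = {
    "stop_reason": "stop_sequence",        # why the model stopped generating
    "stop_sequence": "</function_calls>",  # the sequence that triggered the stop
}

# Downstream code (e.g. LangGraph routing) can then branch on these keys:
if response_metadata["stop_reason"] == "stop_sequence":
    print("model stopped to call a tool")
```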
Appreciate the work you've done here. Regarding `tool_calls` being populated: I am indeed now seeing the expected formatting. However, I have an implementation mimicking the documentation for a LangGraph collaborative multi-agent system, and I'm seeing odd behavior when using ChatBedrock instead of ChatOpenAI. In my case, in addition to the entry-point node not calling the correct tool, tools are not actually being called at all, and I'm having difficulty discerning the root cause. On top of this, I'm getting a "list index out of range" here:
```
File "/usr/local/lib/python3.11/site-packages/langchain_aws/llms/bedrock.py", line 235, in prepare_output
    text = content[0].get("text")
           ~~~~~~~^^^
IndexError: list index out of range
```
I have verified that simply swapping to the ChatOpenAI model resolves the issue and tool calling works as expected.
If this requires a standalone issue, please let me know and I will get that sorted.
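The traceback above suggests Bedrock returned an empty `content` list (plausible when the model stops to call a tool rather than emitting text). A minimal defensive sketch of the parsing step, with hypothetical names that only mimic the shape of `prepare_output`, would be:

```python
# Hypothetical sketch, not the actual langchain-aws code: guard the
# content[0] access so an empty content list (e.g. a pure tool-call
# response) yields an empty string instead of an IndexError.
def extract_text(content: list[dict]) -> str:
    """Return the first text block, or '' when content is empty."""
    if not content:
        return ""
    return content[0].get("text", "") or ""

print(extract_text([]))                   # empty content: no crash
print(extract_text([{"text": "hello"}]))  # normal text response
```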
There are a few separate issues, including my own, that were affected by this in different capacities, so I'm commenting here as well for visibility. @thiagotps I have created a PR against the fork @bigbernnn created which sets `tool_calls` as you have suggested:
The current implementation for function calling with Anthropic models only seems to include the available tools in the system prompt. It does not add `</function_calls>` to the `stop_sequences` list and does not make use of the `stop_reason` and `stop_sequence` values returned by the Bedrock API. Consequently, it does not add the `tool_calls` value to the AIMessage, as expected by LangGraph.