Open DiogoPM9 opened 1 week ago
I have this same issue.
@DiogoPM9
Thanks for reporting this bug. I can reproduce the issue in my testing. The problem seems to be the latencyMs
attribute present in the Bedrock responses. While it is fine for this attribute to be present in the response, its type, int,
is not supported in response_metadata
when messages are merged; only dict, list, or string values are allowed. Updating the latencyMs value to a list should fix this issue.
File "/Users/pijain/projects/langchain-aws-dev/langchain-aws/libs/aws/langchain_aws/chat_models/bedrock_converse.py", line 682, in _messages_to_bedrock
messages = merge_message_runs(messages)
....
File "/Users/pijain/Library/Caches/pypoetry/virtualenvs/langchain-aws-eH7P7gjZ-py3.10/lib/python3.10/site-packages/langchain_core/utils/_merge.py", line 59, in merge_dicts
merged[right_k] = merge_dicts(merged[right_k], right_v)
File "/Users/pijain/Library/Caches/pypoetry/virtualenvs/langchain-aws-eH7P7gjZ-py3.10/lib/python3.10/site-packages/langchain_core/utils/_merge.py", line 65, in merge_dicts
raise TypeError(
TypeError: Additional kwargs key latencyMs already exists in left dict and value has unsupported type <class 'int'>.
It seems we will need a change in this function to update latencyMs
from int to list.
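To illustrate, here is a simplified re-implementation of the merge logic (a sketch only, not the actual langchain_core code) showing where an int-valued latencyMs trips the type check:

```python
# Simplified sketch of the merge behavior in langchain_core/utils/_merge.py
# (illustrative re-implementation, not the library's actual code).
def merge_dicts(left: dict, right: dict) -> dict:
    merged = dict(left)
    for k, v in right.items():
        if k not in merged or merged[k] is None:
            merged[k] = v
        elif v is None:
            continue
        elif isinstance(merged[k], str):
            merged[k] += v  # strings are concatenated
        elif isinstance(merged[k], dict):
            merged[k] = merge_dicts(merged[k], v)  # dicts merge recursively
        elif isinstance(merged[k], list):
            merged[k] = merged[k] + v  # lists are concatenated
        else:
            # ints fall through to here -- this is where latencyMs fails
            raise TypeError(
                f"Additional kwargs key {k} already exists in left dict and "
                f"value has unsupported type {type(merged[k])}."
            )
    return merged

# Two response_metadata dicts that both carry an int latencyMs collide:
try:
    merge_dicts({"metrics": {"latencyMs": 991}}, {"metrics": {"latencyMs": 870}})
except TypeError as e:
    print(e)

# With list values instead, the merge succeeds by concatenation:
print(merge_dicts({"metrics": {"latencyMs": [991]}}, {"metrics": {"latencyMs": [870]}}))
```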
Here is a simplified solution; however, we may also need to handle other keys that cause issues while merging messages.
response["metrics"]["latencyMs"] = [response["metrics"]["latencyMs"]]
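Wrapped in a small helper, the normalization could look like the sketch below (the helper name is hypothetical; it assumes the Bedrock response is a plain dict with a "metrics" key, as shown in the traceback):

```python
# Hypothetical helper (illustrative sketch): wrap an int-valued latencyMs in a
# list so that a later merge can concatenate values instead of raising TypeError.
def normalize_latency(response: dict) -> dict:
    metrics = response.get("metrics", {})
    latency = metrics.get("latencyMs")
    if isinstance(latency, int):
        # Mutates the response in place; already-listed values are left alone,
        # so calling this twice is safe.
        metrics["latencyMs"] = [latency]
    return response

resp = {"metrics": {"latencyMs": 945}}
normalize_latency(resp)
print(resp["metrics"]["latencyMs"])  # now a list rather than a bare int
```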
Requirements:
I was developing a multi-agent workflow using LangChain and LangGraph.
One of my requirements was that the LLMs had to be provided by AWS Bedrock.
However, I noticed an issue with ChatBedrockConverse, where the invocation stops due to an issue with merging dictionaries.
As I was debugging the code, I replaced my LLM with one from OpenAI, and that fixed the issue I had. However, since I have the stated requirement, this is not a workable solution, and it points to a problem in langchain-aws specifically.
Code used:
The graph is as below:
As you can see, if you choose to use ChatOpenAI, the code executes smoothly; however, when you change to ChatBedrockConverse, the invocation breaks.
When the graph is invoked, the architect successfully calls the Tools worker. However, when the architect receives the tool call output, it breaks. The output was the following: