satoshim222 opened 3 months ago
Hey there, @satoshim222! I'm here to help you with any bugs, questions, or contributions you have while we wait for a human maintainer. Let's tackle this together!
The issue you're encountering with the `ValidationException` in LangChain.js when an AI message contains both content and tool usage can be resolved by ensuring that the `AIMessage` class correctly handles the initialization of the `tool_calls` and `invalid_tool_calls` fields. Additionally, the conversion logic in `convertToConverseMessages` should correctly handle the `AIMessage` and `ToolMessage` types.
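For context, the Converse API requires every `toolResult` block in a turn to correspond to a `toolUse` block from the previous assistant turn; the `ValidationException` in this issue is raised when that pairing is broken. Here is a rough illustration of a correctly paired exchange (the IDs, tool name, and values are made up for this example):

```typescript
// Illustrative only: an assistant turn that requested a tool, followed by the
// user turn carrying the tool's result. Every toolResult.toolUseId must refer
// to a toolUse block emitted in the previous assistant turn.
const assistantTurn = {
  role: "assistant",
  content: [
    { text: "Let me look that up." }, // accompanying text content
    {
      toolUse: {
        toolUseId: "tooluse_123", // made-up id
        name: "get_weather", // hypothetical tool name
        input: { city: "Tokyo" },
      },
    },
  ],
};

const toolResultTurn = {
  role: "user",
  content: [
    {
      toolResult: {
        toolUseId: "tooluse_123", // must match the toolUse above
        content: [{ text: "22°C and sunny" }],
      },
    },
  ],
};
```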
Here are the key points to check and update:

1. Initialization of `AIMessage` fields: Ensure that the `AIMessage` constructor correctly initializes the `tool_calls` and `invalid_tool_calls` fields. Here is the relevant part of the `AIMessage` constructor:
```typescript
constructor(
  fields: string | AIMessageFields,
  /** @deprecated */
  kwargs?: Record<string, unknown>
) {
  let initParams: AIMessageFields;
  if (typeof fields === "string") {
    initParams = {
      content: fields,
      tool_calls: [],
      invalid_tool_calls: [],
      additional_kwargs: kwargs ?? {},
    };
  } else {
    initParams = fields;
    const rawToolCalls = initParams.additional_kwargs?.tool_calls;
    const toolCalls = initParams.tool_calls;
    if (
      !(rawToolCalls == null) &&
      rawToolCalls.length > 0 &&
      (toolCalls === undefined || toolCalls.length === 0)
    ) {
      console.warn(
        [
          "New LangChain packages are available that more efficiently handle",
          "tool calling.\n\nPlease upgrade your packages to versions that set",
          "message tool calls. e.g., `yarn add @langchain/anthropic`,",
          "yarn add @langchain/openai`, etc.",
        ].join(" ")
      );
    }
    try {
      if (!(rawToolCalls == null) && toolCalls === undefined) {
        const [toolCalls, invalidToolCalls] =
          defaultToolCallParser(rawToolCalls);
        initParams.tool_calls = toolCalls ?? [];
        initParams.invalid_tool_calls = invalidToolCalls ?? [];
      } else {
        initParams.tool_calls = initParams.tool_calls ?? [];
        initParams.invalid_tool_calls = initParams.invalid_tool_calls ?? [];
      }
    } catch (e) {
      // Do nothing if parsing fails
      initParams.tool_calls = [];
      initParams.invalid_tool_calls = [];
    }
  }
  super(initParams);
  if (typeof initParams !== "string") {
    this.tool_calls = initParams.tool_calls ?? this.tool_calls;
    this.invalid_tool_calls =
      initParams.invalid_tool_calls ?? this.invalid_tool_calls;
  }
  this.usage_metadata = initParams.usage_metadata;
}
```
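As a quick check of that behavior (a minimal sketch assuming `@langchain/core` is installed; the tool call values are invented), constructing an `AIMessage` from a fields object should leave `tool_calls` populated and default `invalid_tool_calls` to an empty array:

```typescript
import { AIMessage } from "@langchain/core/messages";

// Hypothetical values for illustration: an assistant reply that carries both
// text content and a tool call, which is the case reported in this issue.
const aiMessage = new AIMessage({
  content: "Let me check the weather for you.",
  tool_calls: [
    {
      id: "tooluse_123", // made-up id
      name: "get_weather", // hypothetical tool name
      args: { city: "Tokyo" },
    },
  ],
});

console.log(aiMessage.tool_calls); // -> the single tool call above
console.log(aiMessage.invalid_tool_calls); // -> [] (defaulted by the constructor)
```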
2. Conversion logic in `convertToConverseMessages`: Ensure that the conversion function correctly handles the `AIMessage` and `ToolMessage` types. Here is the relevant part of the conversion function:
```typescript
export function convertToConverseMessages(messages: BaseMessage[]): {
  converseMessages: BedrockMessage[];
  converseSystem: BedrockSystemContentBlock[];
} {
  const converseSystem: BedrockSystemContentBlock[] = messages
    .filter((msg) => msg._getType() === "system")
    .map((msg) => {
      const text = msg.content;
      if (typeof text !== "string") {
        throw new Error("System message content must be a string.");
      }
      return { text };
    });
  const converseMessages: BedrockMessage[] = messages
    .filter((msg) => !["system", "tool", "function"].includes(msg._getType()))
    .map((msg) => {
      if (msg._getType() === "ai") {
        const castMsg = msg as AIMessage;
        if (typeof castMsg.content === "string") {
          return {
            role: "assistant",
            content: [
              {
                text: castMsg.content,
              },
            ],
          };
        } else {
          if (castMsg.tool_calls && castMsg.tool_calls.length) {
            return {
              role: "assistant",
              content: castMsg.tool_calls.map((tc) => ({
                toolUse: {
                  toolUseId: tc.id,
                  name: tc.name,
                  input: tc.args,
                },
              })),
            };
          } else {
            const contentBlocks: ContentBlock[] = castMsg.content.map(
              (block) => {
                if (block.type === "text") {
                  return {
                    text: block.text,
                  };
                } else {
                  throw new Error(
                    `Unsupported content block type: ${block.type}`
                  );
                }
              }
            );
            return {
              role: "assistant",
              content: contentBlocks,
            };
          }
        }
      } else if (msg._getType() === "human" || msg._getType() === "generic") {
        if (typeof msg.content === "string") {
          return {
            role: "user",
            content: [
              {
                text: msg.content,
              },
            ],
          };
        } else {
          const contentBlocks: ContentBlock[] = msg.content.flatMap((block) => {
            if (block.type === "image_url") {
              const base64: string =
                typeof block.image_url === "string"
                  ? block.image_url
                  : block.image_url.url;
              return extractImageInfo(base64);
            } else if (block.type === "text") {
              return {
                text: block.text,
              };
            } else {
              throw new Error(`Unsupported content block type: ${block.type}`);
            }
          });
          return {
            role: "user",
            content: contentBlocks,
          };
        }
      } else if (msg._getType() === "tool") {
        const castMsg = msg as ToolMessage;
        if (typeof castMsg.content === "string") {
          return {
            role: undefined,
            content: [
              {
                toolResult: {
                  toolUseId: castMsg.tool_call_id,
                  content: [
                    {
                      text: castMsg.content,
                    },
                  ],
                },
              },
            ],
          };
        } else {
          return {
            role: undefined,
            content: [
              {
                toolResult: {
                  toolUseId: castMsg.tool_call_id,
                  content: [
                    {
                      json: castMsg.content,
                    },
                  ],
                },
              },
            ],
          };
        }
      } else {
        throw new Error(`Unsupported message type: ${msg._getType()}`);
      }
    });
  return { converseMessages, converseSystem };
}
```
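One detail worth checking in the assistant branch above: when `castMsg.content` is a string and the message also carries `tool_calls`, only the text path runs, so the `toolUse` blocks are dropped and the next turn's `toolResult` has nothing to pair with, which is exactly what the `ValidationException` in this issue complains about. Below is a rough sketch of how that branch could emit both the text and the `toolUse` blocks. It is only an illustration of the idea, written against the type names used above (the import paths are assumptions), not the actual patch:

```typescript
import type { AIMessage } from "@langchain/core/messages";
import type {
  ContentBlock,
  Message as BedrockMessage,
} from "@aws-sdk/client-bedrock-runtime";

// Sketch only: build the assistant content from BOTH the text content and the
// tool calls, so every tool call the model made is represented by a toolUse
// block in the converted message.
function convertAIMessageSketch(castMsg: AIMessage): BedrockMessage {
  const contentBlocks: ContentBlock[] = [];

  // Keep the text part, whether content is a plain string or an array of blocks.
  if (typeof castMsg.content === "string") {
    if (castMsg.content.length) {
      contentBlocks.push({ text: castMsg.content });
    }
  } else {
    for (const block of castMsg.content) {
      if (block.type === "text") {
        contentBlocks.push({ text: block.text });
      } else {
        throw new Error(`Unsupported content block type: ${block.type}`);
      }
    }
  }

  // Append one toolUse block per tool call instead of replacing the text.
  for (const tc of castMsg.tool_calls ?? []) {
    contentBlocks.push({
      toolUse: {
        toolUseId: tc.id,
        name: tc.name,
        input: tc.args,
      },
    });
  }

  return { role: "assistant", content: contentBlocks };
}
```

With something along these lines, a following `ToolMessage` converts to a `toolResult` that has a matching `toolUse` in the previous assistant turn, which is what the Converse API validates.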
By ensuring that the `AIMessage` and `ToolMessage` types are correctly handled and initialized, you can prevent the `ValidationException` from occurring when an AI message contains both content and tool usage [1][2][3].
To continue talking to Dosu, mention @dosu.
@bracesproul
Hi, any updates here? Happy to debug or provide any more information necessary.
Hi, @satoshim222. I'm helping the LangChain JS team manage their backlog and am marking this issue as stale.
The issue you reported involves a `ValidationException` occurring in the `ChatBedrockConverse` function when a `ToolMessage` is returned after an AI message. Dosubot provided a potential resolution, indicating that proper initialization of the `AIMessage` class fields and correct handling in the `convertToConverseMessages` function could resolve the problem. You've also expressed willingness to assist with debugging.
Could you please let us know if this issue is still relevant to the latest version of the LangChain JS repository? If it is, feel free to comment here to keep it open. Otherwise, you can close the issue yourself, or it will be automatically closed in 7 days. Thank you!
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
```
[llm/error] [1:chain:AgentExecutor > 10:chain:ToolCallingAgent > 15:llm:ChatBedrockConverse] [230ms] LLM run errored with error: "The number of toolResult blocks at messages.6.content exceeds the number of toolUse blocks of previous turn.

ValidationException: The number of toolResult blocks at messages.6.content exceeds the number of toolUse blocks of previous turn.
    at de_ValidationExceptionRes (/var/task/node_modules/@aws-sdk/client-bedrock-runtime/dist-cjs/index.js:1082:21)
    at de_CommandError (/var/task/node_modules/@aws-sdk/client-bedrock-runtime/dist-cjs/index.js:937:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /var/task/node_modules/@aws-sdk/client-bedrock-runtime/node_modules/@smithy/middleware-serde/dist-cjs/index.js:35:20
    at async /var/task/node_modules/@aws-sdk/client-bedrock-runtime/node_modules/@smithy/core/dist-cjs/index.js:165:18
    at async /var/task/node_modules/@aws-sdk/client-bedrock-runtime/node_modules/@smithy/middleware-retry/dist-cjs/index.js:320:38
    at async /var/task/node_modules/@aws-sdk/client-bedrock-runtime/node_modules/@aws-sdk/middleware-logger/dist-cjs/index.js:34:22
    at async ChatBedrockConverse._streamResponseChunks (/var/task/node_modules/@langchain/aws/dist/chat_models.cjs:250:26)
    at async ChatBedrockConverse._streamIterator (/var/task/node_modules/@langchain/core/dist/language_models/chat_models.cjs:93:34)
    at async ChatBedrockConverse.transform (/var/task/node_modules/@langchain/core/dist/runnables/base.cjs:394:9)"
```
Description
This issue seems to be similar to https://github.com/langchain-ai/langchainjs/issues/6173, but this one happens when the AI message has both content and tool_use, and the ToolMessage is constructed after calling the tool.
The code that I have succeeds except when a tool_use AI message is returned together with content.
[Screenshot: AgentExecutor verbose output]
[Screenshot: error message thrown afterwards]
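For anyone trying to reproduce this, the following is a stripped-down sketch of the kind of history that triggers the error (hypothetical tool, IDs, and model id; this is not the reporter's actual code):

```typescript
import { ChatBedrockConverse } from "@langchain/aws";
import { AIMessage, HumanMessage, ToolMessage } from "@langchain/core/messages";

// Placeholder model id and region; AWS credentials come from the environment.
const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-east-1",
});

// The history contains an assistant turn that has BOTH text content and a
// tool call, followed by the tool's result.
const history = [
  new HumanMessage("What's the weather in Tokyo?"),
  new AIMessage({
    content: "I'll check the weather for you.", // text content AND tool use
    tool_calls: [
      { id: "tooluse_123", name: "get_weather", args: { city: "Tokyo" } },
    ],
  }),
  new ToolMessage({
    tool_call_id: "tooluse_123",
    content: "22°C and sunny",
  }),
];

async function main() {
  // With the conversion shown earlier, the assistant turn above becomes a plain
  // text block (its toolUse is dropped), so the following toolResult has no
  // matching toolUse and Bedrock rejects the request with the ValidationException.
  const response = await model.invoke(history);
  console.log(response);
}

main().catch(console.error);
```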
System Info
Node v18.15.0
"@langchain/aws": "^0.0.6", "@langchain/core": "^0.2.18", "@langchain/community": "^0.2.20",