Closed by talhaFayyazfolio 4 weeks ago
The error you're encountering, "No tools_call in message," suggests that the model's response did not include any tool calls, likely because the input message doesn't match the expected patterns or the structured output does not include tool calls. To address this, ensure your input message is designed to trigger tool calls and that the schema passed to withStructuredOutput accurately represents the expected structure of the response from ChatVertexAI.
Here's a step-by-step approach to troubleshoot and potentially resolve the issue:
1. Verify Schema Compatibility: Make sure the schema you're using with withStructuredOutput matches the expected output from ChatVertexAI. The error might be due to a mismatch between the actual output and the expected schema defined by your Zod object.
2. Check Configuration and Usage: Ensure that the withStructuredOutput function is correctly configured. The name used in the configuration should match the expected function name in your ChatVertexAI setup, and the model should be capable of producing output that conforms to your schema.
3. Debugging and Error Handling: Implement error handling around your withStructuredOutput call to catch any issues during execution. Logging the raw output from ChatVertexAI can help you understand how it differs from your schema expectations.
4. Adjust According to Raw Output: If necessary, adjust your schema or processing logic based on the raw output from ChatVertexAI to ensure compatibility.
Here is a simplified example of how you might adjust your code:
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

const calculatorSchema = z.object({
  operation: z.enum(["add", "subtract", "multiply", "divide"]),
  number1: z.number(),
  number2: z.number(),
});

// Instantiate the chat model (missing from the original snippet).
const chatOpenAIInstance = new ChatOpenAI({ temperature: 0 });

const modelWithStructuredOutput = chatOpenAIInstance.withStructuredOutput(calculatorSchema, {
  name: "calculator",
  includeRaw: true, // Optional: include raw output for debugging
});
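When includeRaw is set as above, you can inspect the raw message before parsing. Here is a minimal sketch of such a check (hasToolCalls and RawMessageKwargs are hypothetical names, not LangChain APIs; the message shape follows the verbose logs later in this thread):

```typescript
// Hypothetical helper: inspect the raw message for tool calls before
// handing it to a structured-output parser. The shape mirrors the
// AIMessageChunk kwargs seen in LangChain's verbose logs.
interface RawMessageKwargs {
  content: string;
  tool_calls?: Array<{ name: string; args: Record<string, unknown> }>;
}

function hasToolCalls(kwargs: RawMessageKwargs): boolean {
  return Array.isArray(kwargs.tool_calls) && kwargs.tool_calls.length > 0;
}

// An empty response like the ones reported in this issue:
const emptyResponse: RawMessageKwargs = { content: "", tool_calls: [] };

// A response that would parse successfully:
const goodResponse: RawMessageKwargs = {
  content: "",
  tool_calls: [
    { name: "calculator", args: { operation: "multiply", number1: 2, number2: 3 } },
  ],
};
```

A check like this lets you log the raw output and fail with a clearer message than "No tools_call in message".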
Ensure your input message and schema are correctly aligned with what ChatVertexAI is expected to produce. If the issue persists, providing more details about when the error occurs could help further diagnose the problem.
To continue the conversation, mention @dosu.
This seems odd. But looking into it.
The system info dependencies list doesn't show any of the @langchain/google-* modules. Can you confirm which ones are installed and their versions?
@afirstenberg sure
Same problem here. When running in verbose mode, it looks like the response when using withStructuredOutput is an empty string.
[llm/start] [1:llm:ChatGoogleStructuredOutput] Entering LLM run with input: {
"messages": [
[
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"HumanMessage"
],
"kwargs": {
"content": "What is 1628253239 times 81623836?",
"additional_kwargs": {},
"response_metadata": {}
}
}
]
]
}
[llm/end] [1:llm:ChatGoogleStructuredOutput] [1.39s] Exiting LLM run with output: {
"generations": [
[
{
"text": "",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"AIMessageChunk"
],
"kwargs": {
"content": "",
"additional_kwargs": {},
"tool_call_chunks": [],
"tool_calls": [],
"invalid_tool_calls": [],
"response_metadata": {
"usage_metadata": {},
"safety_ratings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE",
"probability_score": 0.12973313,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severity_score": 0.09912086
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE",
"probability_score": 0.22592856,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severity_score": 0.13591877
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE",
"probability_score": 0.16545822,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severity_score": 0.08632348
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE",
"probability_score": 0.08299415,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severity_score": 0.05155819
}
]
}
}
}
}
]
],
"llmOutput": {
"usage_metadata": {},
"safety_ratings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE",
"probability_score": 0.12973313,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severity_score": 0.09912086
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE",
"probability_score": 0.22592856,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severity_score": 0.13591877
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE",
"probability_score": 0.16545822,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severity_score": 0.08632348
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE",
"probability_score": 0.08299415,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severity_score": 0.05155819
}
]
}
}
same issue here
Same issue here, same problem as well. It works fine without using model.withStructuredOutput. I get the following error message:
⨯ Error: No tools_call in message [{"message":{"lc":1,"type":"constructor","id":["langchain_core","messages","AIMessageChunk"],"kwargs":{"content":"","additional_kwargs":{},"tool_call_chunks":[],"tool_calls":[],"invalid_tool_calls":[],"response_metadata":{"usage_metadata":{},"safety_ratings":[{"category":"HARM_CATEGORY_HATE_SPEECH","probability":"NEGLIGIBLE","probability_score":0.05738754,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.049681153},{"category":"HARM_CATEGORY_DANGEROUS_CONTENT","probability":"NEGLIGIBLE","probability_score":0.09877259,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.09301681},{"category":"HARM_CATEGORY_HARASSMENT","probability":"NEGLIGIBLE","probability_score":0.089136936,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.046378203},{"category":"HARM_CATEGORY_SEXUALLY_EXPLICIT","probability":"NEGLIGIBLE","probability_score":0.116965584,"severity":"HARM_SEVERITY_NEGLIGIBLE","severity_score":0.043528143}]}}},"text":""}]
While the name of the fix doesn't look like it should address it, #5571 should fix the issue. Thanks @talhaFayyazfolio for the test case, which made it much easier to verify.
If you can, check out the patch to see if it looks good to you before it's merged.
@afirstenberg thanks for this fix. It looks good to me.
Thank you so much @afirstenberg!
The fix is live in 0.0.17 of the @langchain/google-* packages!
Tested with the new live version in 0.0.17. If I do not attach a file input, withStructuredOutput works great! However, if I attach a file, I get an error when I try to invoke the function. Example code below:
const zodChatMessage = z.object({
  chatMessage: z.string().optional().describe("the chat message answer responding to the user's question"),
  chatCitation: z.string().optional().describe('1-3 key sentences from the text segment justifying the extracted chat message'),
  chatPage: z.number().optional().describe('the page number where this answer was extracted from')
});

const model = new ChatVertexAI({
  temperature: 0.7,
  model: 'gemini-1.5-pro',
}).withStructuredOutput(zodChatMessage);

const inputContent: MessageContent = [{ type: 'text', text: 'What is the powerhouse of the cell?' }];
inputContent.push({ type: 'image_url', image_url: `data:application/pdf;base64,${pdfFileString}` });

const input = [new HumanMessage({ content: inputContent })];
const response = await model.invoke(input);
console.log(response);
This looks to be the relevant error message:
errors: [
{
message: 'Unable to submit request because Function Calling is not supported with non-text input. Remove the function declarations or remove inline_data/file_data from contents. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling',
domain: 'global',
reason: 'badRequest'
}
],
From the sound of it, the issue seems to be with Vertex AI / Gemini itself. However, it would seem a little odd if that were the case, since this seems like something relatively important.
I was hoping we could confirm that this is expected behavior with withStructuredOutput as well? It would be really nice if you could use the same chain with and without input files!
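Given the error above, one hedged way to guard against it in application code is a pre-flight check that refuses to combine structured output (i.e. function calling) with non-text content parts. This is a sketch only; isTextOnlyContent and ContentPart are hypothetical names, not part of LangChain or the Vertex AI SDK:

```typescript
// Hypothetical pre-flight check: per the Vertex AI error above, function
// calling (and therefore withStructuredOutput) cannot be combined with
// inline_data / file_data parts, so reject mixed content up front.
type ContentPart =
  | { type: "text"; text: string }
  | { type: "image_url"; image_url: string };

function isTextOnlyContent(parts: ContentPart[]): boolean {
  return parts.every((part) => part.type === "text");
}

const textOnly: ContentPart[] = [
  { type: "text", text: "What is the powerhouse of the cell?" },
];
const withFile: ContentPart[] = [
  ...textOnly,
  { type: "image_url", image_url: "data:application/pdf;base64,AAAA" },
];
```

Failing fast in your own code produces a clearer error than the badRequest response from the API.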
Okay, actually it looks like this needs to be opened back up. I am getting the No tools_call in message error once again when attaching file types that are plain text, which should be supported by Gemini. The following code results in a No tools_call in message error. Note the change of file format from PDF to CSV.
const zodChatMessage = z.object({
  chatMessage: z.string().describe("your response to the input message."),
  // chatCitation: z.string().optional().describe('1-3 key sentences from the text segment justifying the extracted chat message'),
  // chatPage: z.number().optional().describe('the page number or file where this answer was extracted from')
});

const model = new ChatVertexAI({
  temperature: 0.7,
  model: 'gemini-1.5-pro',
  authOptions: { credentials: googleServiceAccountDevelopment, projectId: 'datum-development-403722' }
}).withStructuredOutput(zodChatMessage);

const inputContent: MessageContent = [{ type: 'text', text: 'Can you read this file? If so, please describe whats happening here.' }];
const inputCSV = readFileSync('/path/to/my/file.csv').toString('base64');
inputContent.push({ type: 'image_url', image_url: `data:text/csv;base64,${inputCSV}` });

const input = [new HumanMessage({ content: inputContent })];
The error message from the above comment is different from the error message in this one (which is expected, as CSVs are plain text).
https://ai.google.dev/gemini-api/docs/prompting_with_media?lang=python
Will check it out - I wonder if it's not forcing the tool call. What if you prompt more strongly?
I'm investigating, but I wanted to point a few things out while I do:
Oh yes, sorry about that 😂 it was late last night. I mostly just wanted to point out that the CSV is interpreted as text by its file format definition.
I tried with a regular text file as well and resulted in the same issue.
I'm actually pretty surprised that Vertex AI doesn't support it; perhaps it just interprets it as text/plain instead?
I can't duplicate the problem.
It looks like Vertex AI is ok with "text/csv", so the documentation seems to be wrong (big shock).
My test code:
const zodChatMessage = z.object({
  chatMessage: z.string().optional().describe("the chat message answer responding to the user's question"),
  chatCitation: z.string().optional().describe('1-3 key sentences from the text segment justifying the extracted chat message'),
  chatPage: z.number().optional().describe('the page number where this answer was extracted from')
});

const model = new ChatVertexAI({
  temperature: 0.7,
  model: 'gemini-1.5-pro',
}).withStructuredOutput(zodChatMessage);

const filePath = "src/tests/data/structuredOutput.csv";
const fileString = loadFile(filePath);

const inputContent: MessageContent = [{ type: 'text', text: 'What is the price of a shrimp cocktail?' }];
inputContent.push({ type: 'image_url', image_url: `data:text/csv;base64,${fileString}` });

const input = [new HumanMessage({ content: inputContent })];
const response = await model.invoke(input);
console.log(response);
Gives a response:
{ chatMessage: 'The price of the Shrimp Cocktail is $13.50.' }
When I run the test using the raw JSON it gives me:
{
"candidates": [
{
"content": {
"role": "model",
"parts": [
{
"text": "The price of the Shrimp Cocktail is $13.50. \n"
},
{
"functionCall": {
"name": "extract",
"args": {
"chatMessage": "The price of the Shrimp Cocktail is $13.50."
}
}
}
]
},
"finishReason": "STOP",
"safetyRatings": [
{
"category": "HARM_CATEGORY_HATE_SPEECH",
"probability": "NEGLIGIBLE",
"probabilityScore": 0.111627996,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severityScore": 0.1362632
},
{
"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
"probability": "NEGLIGIBLE",
"probabilityScore": 0.34620965,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severityScore": 0.1468197
},
{
"category": "HARM_CATEGORY_HARASSMENT",
"probability": "NEGLIGIBLE",
"probabilityScore": 0.24526574,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severityScore": 0.08787644
},
{
"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
"probability": "NEGLIGIBLE",
"probabilityScore": 0.17009404,
"severity": "HARM_SEVERITY_NEGLIGIBLE",
"severityScore": 0.07356305
}
]
}
],
"usageMetadata": {
"promptTokenCount": 421,
"candidatesTokenCount": 33,
"totalTokenCount": 454
}
}
which seems correct.
The only thing I can guess about why my prompt is working and yours isn't is that by the time Gemini gets the tokens, it doesn't see a "file" anywhere. It just sees more tokens. So a prompt that asks about a "file" doesn't make any sense to it unless there is something that explicitly says "here is a file" or something.
The CSV file is a pretty straightforward CSV file that contains a menu item, description, and price.
The loadFile function you see there just loads the file and turns it into Base64.
Okay, I am using a different computer from the one last night; however, it seems like the output is still quite inconsistent. When attaching a .csv, I will sometimes get the proper output structure, whereas other times it will not be correct (i.e. null vs. a number or optional), resulting in an error. For example:
OutputParserException [Error]: Failed to parse. Text: "{
"chatCitation": "The length of the lease, in months",
"chatMessage": "63 months",
"chatPage": null
}". Error: [{"code":"invalid_type","expected":"number","received":"null","path":["chatPage"],"message":"Expected number, received null"}]
It seems to like to put null as the response for optional inputs; however, if you use Zod's .nullable(), you will get an error. This is an issue with or without the file attachment.
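One hedged workaround for the null-vs-optional mismatch is a plain pre-processing step (not a LangChain or Zod feature; stripNulls is a hypothetical name) that drops null values from the raw object before validating it against a schema built with .optional():

```typescript
// Hypothetical pre-processing step: Gemini tends to emit explicit nulls for
// fields it has no value for, which fails validation for fields declared
// .optional(). Dropping null entries turns them into absent fields instead.
function stripNulls<T extends Record<string, unknown>>(obj: T): Partial<T> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    if (value !== null) {
      out[key] = value;
    }
  }
  return out as Partial<T>;
}

// The failing payload from the parse error above:
const cleaned = stripNulls({
  chatCitation: "The length of the lease, in months",
  chatMessage: "63 months",
  chatPage: null,
});
```

With the null field removed, a schema whose fields are .optional() should accept the object.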
Lastly, it looks like you can still occasionally get a no tools_call in message error if you play around with the input prompt (i.e. ask it something irrelevant to the attached document), but this is much less common.
I think the only way to avoid any possibility of "no tools_call" is to add a "fallback" tool and to force it to reply with a tool using the ANY function calling mode (which we haven't implemented support for yet - see https://github.com/langchain-ai/langchainjs/issues/5072).
Gemini ignoring the type seems odd, however. I haven't encountered that before.
I'm not sure there is anything that LangChain.js can do differently aside from implementing #5072, but I'm open to suggestions.
Description
I am trying to integrate ChatVertexAI with withStructuredOutput. This works fine without withStructuredOutput.