Closed · athrael-soju closed this issue 3 months ago
@jeremyphilemon @jaredpalmer
The messages sent to OpenAI are not formatted properly:

> Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.
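For context, the ordering constraint the API enforces can be sketched with plain objects (a simplified illustration, not the SDK's actual types; the IDs are made up, and parallel tool calls would need a looser check than "immediately preceding message"):

```typescript
// Minimal sketch: a 'tool' message is only valid if the preceding assistant
// message carries a tool_calls entry with a matching id.
type ChatMessage =
  | { role: "user" | "system"; content: string }
  | { role: "assistant"; content: string | null; tool_calls?: { id: string }[] }
  | { role: "tool"; tool_call_id: string; content: string };

function toolMessagesAreValid(messages: ChatMessage[]): boolean {
  return messages.every((msg, i) => {
    if (msg.role !== "tool") return true;
    const prev = messages[i - 1];
    return (
      prev !== undefined &&
      prev.role === "assistant" &&
      (prev.tool_calls ?? []).some((c) => c.id === msg.tool_call_id)
    );
  });
}

// Valid: assistant tool_calls immediately followed by the matching tool result.
const ok: ChatMessage[] = [
  { role: "user", content: "Buy AAPL" },
  { role: "assistant", content: null, tool_calls: [{ id: "call_1" }] },
  { role: "tool", tool_call_id: "call_1", content: '{"status":"completed"}' },
];

// Invalid: the tool result has no preceding assistant tool_calls message,
// which is exactly what triggers the error above.
const bad: ChatMessage[] = [
  { role: "user", content: "Buy AAPL" },
  { role: "tool", tool_call_id: "call_1", content: '{"status":"completed"}' },
];

console.log(toolMessagesAreValid(ok), toolMessagesAreValid(bad)); // true false
```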
I was testing, and this works (note that `args` must carry the actual argument values, not a Zod schema — the schema belongs in the tool definition):

```ts
const toolCallMessage: CoreMessage = {
  role: "assistant",
  content: [
    {
      type: "tool-call",
      toolName: "showStockPurchase",
      toolCallId: "8Bb6oJ1vAIRuHSIAVYmAp",
      args: {
        symbol: "AAPL",
        price: 150,
        defaultAmount: 100,
      },
    },
  ],
};

const toolMessage: CoreMessage = {
  role: "tool",
  content: [
    {
      type: "tool-result",
      toolCallId: "8Bb6oJ1vAIRuHSIAVYmAp",
      toolName: "showStockPurchase",
      result: JSON.stringify({
        symbol: "AAPL",
        price: 150,
        defaultAmount: 100,
        status: "completed",
      }),
    },
  ],
};

const all = [...messages, toolCallMessage, toolMessage];
```
@prashantbhudwal Could you raise a PR please?
@athrael-soju This is not a solution, just a problem description. I am still learning this. So, I don't think I am the best person for the PR.
Same error. Any solution?
Same kind of error here, with this error message: `Cannot destructure property 'role' of '.for' as it is undefined. at convertToOpenAIChatMessage`
Can't have the app reply after showing me a card.
Time for me to study some docs 📃 and try some implementations !! GLHF
UPDATE:
I've tried checking out the previous commit and the bug disappears:
The new SDK uses streamUI: https://github.com/vercel/ai-chatbot/blob/095550d4dca22dc506cdbda815cab94cfe8fbe74/lib/chat/actions.tsx#L140

while the old one uses render: https://github.com/vercel/ai-chatbot/blob/d5f736128dca6efff963fa3705f728b06f8d7927/lib/chat/actions.tsx#L144
> Because the new SDK uses streamUI
> And old one uses a render

The bug appears when using streamUI, not render. If you revert to the previous commit, it works fine with render.
I have the same error after clicking on the first generated component. I updated everything, even Next.js, but it didn't work.
Yes, this is an issue with the new code. I guess it should be reverted or fixed? I tried a few things to resolve it, but wasn't able to.
The latest update ("streamUI instead of render #324") no longer allows messages with "role: system" in the middle of the conversation — only at the start, via the new system prompt param.

The problem is that the functions also use "role: system" messages to register actions or events from the user in the conversation context.

The Cannot destructure property 'role' error is a consequence of the function below, which returns undefined for "role: system" messages. This function is inside the dependency "ai": "^3.1.1":
```js
// core/prompt/convert-to-language-model-prompt.ts
function convertToLanguageModelPrompt(prompt) {
  const languageModelMessages = [];
  if (prompt.system != null) {
    languageModelMessages.push({ role: "system", content: prompt.system });
  }
  switch (prompt.type) {
    case "prompt": {
      languageModelMessages.push({
        role: "user",
        content: [{ type: "text", text: prompt.prompt }]
      });
      break;
    }
    case "messages": {
      languageModelMessages.push(
        ...prompt.messages.map((message) => {
          switch (message.role) {
            case "user": {
              if (typeof message.content === "string") {
                return {
                  role: "user",
                  content: [{ type: "text", text: message.content }]
                };
              }
              return {
                role: "user",
                content: message.content.map((part) => {
                  var _a;
                  switch (part.type) {
                    case "text": {
                      return part;
                    }
                    case "image": {
                      if (part.image instanceof URL) {
                        return {
                          type: "image",
                          image: part.image,
                          mimeType: part.mimeType
                        };
                      }
                      const imageUint8 = convertDataContentToUint8Array(part.image);
                      return {
                        type: "image",
                        image: imageUint8,
                        mimeType: (_a = part.mimeType) != null ? _a : detectImageMimeType(imageUint8)
                      };
                    }
                  }
                })
              };
            }
            case "assistant": {
              if (typeof message.content === "string") {
                return {
                  role: "assistant",
                  content: [{ type: "text", text: message.content }]
                };
              }
              return { role: "assistant", content: message.content };
            }
            case "tool": {
              return message;
            }
          }
        })
      );
      break;
    }
    default: {
      const _exhaustiveCheck = prompt;
      throw new Error(`Unsupported prompt type: ${_exhaustiveCheck}`);
    }
  }
  return languageModelMessages;
}
```
I tested this in the compiled module node_modules/ai/rsc/dist/rsc-server.mjs, adding the following case to convertToLanguageModelPrompt:

```js
case "system": {
  return message;
}
```
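To see why the missing case produces undefined entries, here is a stripped-down, standalone model of the mapping (not the real SDK code — a simplified sketch): a switch over message.role inside .map() falls through to undefined for any role it doesn't handle, and the added "system" case passes those messages through.

```typescript
// Simplified model of convertToLanguageModelPrompt's message mapping.
// A switch inside .map() with no matching case yields undefined entries,
// which later crash when something destructures { role } from them.
type Message = { role: string; content: unknown };

function convertMessages(messages: Message[]): (Message | undefined)[] {
  return messages.map((message) => {
    switch (message.role) {
      case "user":
      case "assistant":
      case "tool":
        return message;
      // Without this case, a "system" message maps to undefined:
      case "system":
        return message;
    }
    // No default: unhandled roles fall through and .map() records undefined.
  });
}

const history: Message[] = [
  { role: "user", content: "hi" },
  { role: "system", content: "user clicked the purchase card" },
];

// With the "system" case present, no entry is undefined.
console.log(convertMessages(history).every((m) => m !== undefined)); // true

// An unhandled role still maps to undefined — the original bug:
console.log(convertMessages([{ role: "function", content: null }])[0]); // undefined
```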
Result: this works, but the correction must be made in the repository of the dependency "ai": "^3.1.1". And it depends on whether it is considered good practice to send system messages inside the conversation to save new context (user events).
The above didn't solve my problem. There is a phantom object that is undefined on my end, and that's causing the error. Obviously a little sanity check addresses this, but I'm curious where and why an operation returns undefined in the first place, so it can be fixed there instead.
@kevb10 you can track your messages in the streamUI function located in node_modules/ai/rsc/dist/rsc-server.mjs. Check the messages before and after, and you can test with the function convertToLanguageModelPrompt(validatedPrompt) to see which message returns undefined.

P.S. Remember that you are editing the dependency.
Thanks @kevb10. It didn't solve my problem either, but it's nice to see progress on this issue.

Update: I implemented your sanity check and it works for me.
@kevb10's sanity check also works for me, using patch-package to create a patch file instead of making edits inside node_modules (this way, I won't lose my changes as soon as I run npm update or npm install). Still not an ideal solution, obviously, but good enough for the time being.

Just ran:

```shell
npm install patch-package postinstall-postinstall --save-dev
```

Followed by:

```shell
npx patch-package @ai-sdk/openai
```

And the issue went away. Still interested in a more permanent solution, though.
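For the patch to be re-applied automatically after future installs, patch-package's documented setup is a postinstall script in package.json — a minimal sketch:

```json
{
  "scripts": {
    "postinstall": "patch-package"
  }
}
```

With this in place, npm runs patch-package after every install and re-applies the saved patch files from the patches/ directory.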
Can anyone please explain why https://chat.vercel.ai/ works perfectly fine, but the main branch on localhost does not?
Isn't the main branch from the repo deployed to this domain?
Maybe because that deployment points to an older commit.
Just noting that I'm also running into this bug and have been troubleshooting it a bit today. This affects any component that makes use of function or system role messages and happens after an unsupported message role type has been inserted into the chat history (which creates an undefined chat message).
What I've found mirrors what some of the other commenters have noted: adjusting the convertToLanguageModelPrompt() function in node_modules/ai/rsc/dist/rsc-server.mjs to handle more of the message types defined in actions.tsx (specifically 'function' and 'system') seems to eliminate the error, although it's not a good solution since it's a workaround in the dependency and I'm not clear what other effects that change might be having.
Something in the RSC framework is choking on handling message types of 'function', 'tool', 'data', and in some cases 'system' and it's causing them to become undefined messages in the chat history array, which is creating downstream problems elsewhere in the app.
I'm also not clear on whether those roles are intended to be functional in the RSC or with GPT-4 and just aren't working here yet, or if the app is trying to do something it shouldn't be and using the roles incorrectly. The message role functionality is still very new to me!
Replacing all function roles with assistant works. And the ai SDK now supports system messages as of ai@3.1.5: https://github.com/vercel/ai/blob/main/packages/core/core/prompt/message.ts#L12
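A minimal sketch of that role replacement (plain objects with a hypothetical shape — the real chat-history messages carry more fields):

```typescript
// Rewrite legacy 'function' roles to 'assistant' before handing the
// history to the SDK, since 'function' is not a supported message role.
type Msg = { role: string; content: string };

function normalizeRoles(messages: Msg[]): Msg[] {
  return messages.map((m) =>
    m.role === "function" ? { ...m, role: "assistant" } : m
  );
}

const history: Msg[] = [
  { role: "user", content: "Show trending stocks" },
  { role: "function", content: '{"symbol":"AAPL"}' },
];

console.log(normalizeRoles(history).map((m) => m.role)); // [ 'user', 'assistant' ]
```

Using .map() with a spread keeps the original array unmutated, which matters if other parts of the app still hold a reference to the raw history.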
Honestly, with all the bugs combined, you're better off using the vercel-ai-src example
Thanks everyone for trying to debug the issue, and I appreciate everyone's patience! Like you all had suspected, the `messages` property didn't follow the spec, so the migration from `render` to `streamUI` wasn't trivial and caused errors.
@jeremyphilemon although https://github.com/vercel/ai-chatbot/pull/324 resolves the majority of issues from using streamUI over render, this one still remains.
Simply get the trending memecoins to show and select one of them. Error attached.