vishalsaugat opened this issue 1 month ago
AFAIK OpenAI does not offer Llama 3.1. The error you see comes from a specific provider. Which provider are you using?
I am following the steps provided on the Vercel SDK page:
```tsx
'use server';

import { streamUI } from 'ai/rsc';
import { tool } from 'ai'; // `tool` is exported from 'ai', not 'ai/rsc'
import { createOpenAI as createGroq } from '@ai-sdk/openai';
import { z } from 'zod';

const groq = createGroq({
  baseURL: 'https://api.groq.com/openai/v1',
  apiKey: process.env.GROQ_API_KEY,
});

export async function streamComponent() {
  const result = await streamUI({
    model: groq('llama-3.1-70b-versatile'),
    prompt: 'Get the weather for San Francisco',
    text: ({ content }) => <div>{content}</div>,
    tools: {
      getWeather: tool({
        description: 'Get the weather for a location',
        parameters: z.object({ location: z.string() }),
        generate: async function* ({ location }) {
          yield <div>loading...</div>;
          const weather = '25c'; // await getWeather(location);
          return (
            <div>
              the weather in {location} is {weather}.
            </div>
          );
        },
      }),
    },
  });

  return result.value;
}
```
This should work according to the documentation, but it doesn't, because of the toolChoice parameter. Reference: https://sdk.vercel.ai/docs/guides/llama-3_1
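For completeness, the guide pairs that server action with a client component along these lines (a sketch; the `./actions` import path is an assumption):

```tsx
'use client';

import { useState, type ReactNode } from 'react';
import { streamComponent } from './actions'; // assumed path to the server action above

export default function Page() {
  const [component, setComponent] = useState<ReactNode>();

  return (
    <div>
      {/* invoking the server action streams the generated UI back to the client */}
      <button onClick={async () => setComponent(await streamComponent())}>
        Stream Component
      </button>
      <div>{component}</div>
    </div>
  );
}
```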
@vishalsaugat can you try removing the `tool` function call? See https://github.com/vercel/ai/pull/2513
@lgrammel Yes, I have tried that too; it didn't work.
Essentially, there should be a toolChoice value, e.g. toolChoice: 'donotset', that omits the tool_choice parameter entirely when calling the model API. Currently the SDK always passes either 'auto', 'none', or a function object.
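Something like this at the call site (toolChoice: 'donotset' is the value proposed here; it does not exist in the SDK today):

```tsx
// Hypothetical: 'donotset' is the value proposed in this issue, not an
// existing streamUI option. The intent is that the SDK would then leave
// tool_choice out of the provider request entirely.
const result = await streamUI({
  model: groq('llama-3.1-70b-versatile'),
  prompt: 'Get the weather for San Francisco',
  toolChoice: 'donotset', // proposed: omit tool_choice when calling the model API
  tools: {
    /* ... same tools as above ... */
  },
});
```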
@vishalsaugat I believe this might be caused by some changes on the Groq side. I've reached out to them. Can you try the 8B model instead?
Hey @vishalsaugat - would you be able to upload your repository so I can take a look? I've just run all of the code snippets from the guide again (including the streamUI one mentioned) and it's working as expected.
OK, so I guess it's because Groq is handling this on its own side and not forwarding tool_choice: 'auto' to Llama 3.1. I am using Llama 3.1 via Azure, and it seems Azure is not omitting tool_choice.
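To illustrate (assuming both providers accept OpenAI-compatible chat completion bodies; these are field-level sketches, not real payloads):

```ts
// Sketch of the request-body difference only; model names and payloads are illustrative.
const messages = [{ role: 'user', content: 'Generate an image of a cat' }];
const tools = [{ type: 'function', function: { name: 'generateImage', parameters: {} } }];

// Fails when the provider forwards tool_choice to Llama 3.1 (what Azure appears to do):
const failingBody = { model: 'llama-3.1-70b', messages, tools, tool_choice: 'auto' };

// Works when tool_choice is omitted (what Groq appears to do on its side):
const workingBody = { model: 'llama-3.1-70b', messages, tools };
```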
This is my code:
```tsx
// Excerpt from inside the server action; the opening `try {` was missing from
// the original paste and is restored here to balance the catch below.
try {
  result = await streamUI({
    model: getModelFunction(model),
    initial: <SpinnerMessage model={model} />,
    maxTokens: getMaxTokens(model),
    system: getPrompt(systemPrompt),
    messages: [
      ...aiState.get().messages.map((message: any) => ({
        role: message.role,
        content: message.content,
        name: message.name,
        model: message.model
      }))
    ],
    text: ({ content, done, delta }) => {
      if (!textStream) {
        textStream = createStreamableValue('')
        textNode = <BotMessage model={model} content={textStream.value} />
      }
      if (done) {
        textStream.done()
        aiState.done({
          ...aiState.get(),
          messages: [
            ...aiState.get().messages,
            {
              id: nanoid(),
              role: 'assistant',
              content,
              model
            }
          ]
        })
      } else {
        textStream.update(delta)
      }
      return textNode
    },
    tools: {
      generateImage: {
        description: 'Generate an image based on the user message.',
        parameters: z.object({
          text: z.string().describe('A detailed description of the image to be generated.')
        }),
        generate: async function* ({ text }: { text: string }) {
          yield (
            <BotCard model={model}>
              {spinner}
            </BotCard>
          )
          await sleep(1000)
          const toolCallId = nanoid()
          let result: ImageResult[] = await generateImage({
            prompt: text,
          })
          aiState.done({
            ...aiState.get(),
            messages: [
              ...aiState.get().messages,
              {
                id: nanoid(),
                role: 'assistant',
                content: [
                  {
                    type: 'tool-call',
                    toolName: 'generateImage',
                    toolCallId,
                    args: { text }
                  }
                ],
                model
              },
              {
                id: nanoid(),
                role: 'tool',
                content: [
                  {
                    type: 'tool-result',
                    toolName: 'generateImage',
                    toolCallId,
                    result
                  }
                ],
                model
              }
            ]
          })
          return <BotMessageImage result={result} model={model} />
        }
      }
    }
  })
} catch (error: any) {
  try {
    result = {
      value: error.responseBody ? JSON.parse(error.responseBody).error.message : error.message
    }
  } catch (err: any) {
    result = {
      value: <h1>{error.message}</h1>
    }
  }
}
return {
  id: nanoid(),
  display: result.value
}
```
This code works if I also pass toolChoice: 'none', but when I pass toolChoice: 'auto', or don't pass toolChoice at all, it fails with the error I pasted in my first message.
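In other words, the only variant that goes through right now is the explicit opt-out (which of course means the generateImage tool is never selected); a sketch reusing the identifiers from the snippet above:

```tsx
// Workaround sketch: toolChoice: 'none' avoids the error, but it also
// prevents the model from ever calling the generateImage tool.
result = await streamUI({
  model: getModelFunction(model),
  toolChoice: 'none', // 'auto', or omitting toolChoice, triggers the error
  // ...same messages, text, and tools options as in the snippet above
})
```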
@lgrammel It does work with the Llama 8B model. Is it possible to add support for the behavior mentioned above?
UPDATE: I get the same error with the Llama 8B model as well.
Description

I am using a Llama 3.1 model through the @ai-sdk/openai package. I get this error when setting toolChoice: 'auto' in streamUI, or when not setting it at all. On reading further, I found out that Llama doesn't support the "tool_choice": "auto" parameter in its API, and if toolChoice is set to null, it automatically gets converted to 'auto': https://github.com/vercel/ai/blob/f7a94535f1d8b8a6f4179d7f5cd762389ef6de4b/packages/core/core/prompt/prepare-tools-and-tool-choice.ts#L37C21-L37C25

Solution: support null as a toolChoice value that does not send tool_choice to Llama (or other model APIs that reject it).
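A sketch of what that could look like in the SDK's tool-choice preparation step (a paraphrase of the idea, not the actual code in the linked file):

```ts
// Hypothetical sketch of the requested change; prepareToolChoice is an
// illustrative name, not the SDK's real function.
type ToolChoice = 'auto' | 'none' | 'required' | null | undefined;

function prepareToolChoice(toolChoice: ToolChoice): { tool_choice?: string } {
  // Proposed: null means "do not send tool_choice to the provider at all".
  if (toolChoice === null) return {};
  // Current behavior per the linked line: unset falls back to 'auto'.
  return { tool_choice: toolChoice ?? 'auto' };
}
```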