vercel / ai

Build AI-powered applications with React, Svelte, Vue, and Solid
https://sdk.vercel.ai/docs

streamUI error handling #1349

Open steebchen opened 5 months ago

steebchen commented 5 months ago

Description

It looks like the render function doesn't support error handling. I tried wrapping the render call in a try/catch, but it doesn't catch any errors (specifically when an error is returned immediately, such as a context-window-length-exceeded error).
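A minimal sketch of why the try/catch never fires (illustrative names only, not AI SDK APIs): the provider call is started fire-and-forget inside the render call, so its rejection happens after the surrounding try block has already exited, and Node reports it as an unhandledRejection instead.

```typescript
// simulateRender stands in for render(): it kicks off async streaming work
// and returns UI immediately, before the provider call can fail.
function simulateRender(): string {
  const streaming = (async () => {
    // Simulated provider failure, like the 400 context-length error.
    throw new Error("400 maximum context length exceeded");
  })();
  // Without this handler, Node would report an unhandledRejection.
  streaming.catch(() => {});
  return "<SpinnerMessage />"; // UI returned right away
}

let outcome = "not run";
try {
  simulateRender();
  outcome = "returned ok"; // the synchronous call itself never throws
} catch {
  outcome = "caught"; // never reached: the rejection is asynchronous
}
```

The try/catch only guards the synchronous portion of the call, which is why `outcome` ends up as `"returned ok"` even though the simulated provider call rejected.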

Code example

I used this template: https://github.com/vercel/ai-chatbot

At this part:

return render({
  model: 'gpt-3.5-turbo',
  provider: openai,
  initial: <SpinnerMessage />,
  messages: [
    {
      role: 'system',
      content: `You are a helpful bot.`
    },
    ...aiState.get().messages.map((message: any) => ({
      role: message.role,
      content: message.content,
      name: message.name
    }))
  ]
})

This can result in this error being thrown

 ⨯ unhandledRejection: BadRequestError: 400 This model's maximum context length is 16385 tokens. However, your messages resulted in 21253 tokens. Please reduce the length of the messages.

and the UI just keeps showing this:

(screenshot: the UI stuck on the loading indicator)

Additional context

No response
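A generic workaround sketch for this class of problem (the names here are illustrative, not AI SDK APIs): attach the error handling to the streaming promise itself and route any rejection into the state the UI reads, instead of expecting a try/catch around the call to see it.

```typescript
// The object the UI renders from; starts as a spinner, like <SpinnerMessage />.
type Display = { current: string };

// Wraps an async provider call so a rejection updates the UI state
// rather than surfacing as an unhandledRejection.
function renderSafely(
  call: () => Promise<string>
): { ui: Display; done: Promise<void> } {
  const ui: Display = { current: "spinner" };
  const done = call()
    .then(text => {
      ui.current = text;
    })
    .catch(err => {
      // Surface the provider error instead of spinning forever.
      ui.current = "error: " + (err as Error).message;
    });
  return { ui, done }; // ui is returned immediately and updated asynchronously
}
```

The same idea in AI SDK terms would be to push an error message into the streamed UI when the model call rejects; whether the SDK exposes a supported hook for this is exactly what this issue is asking.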

parthematics commented 5 months ago

Also experiencing this issue - would love some guidance here!

nateaxcell commented 5 months ago

Same, it's obvious that the OpenAI library is throwing an error, but there doesn't seem to be a consistent way to catch it with this library.

ghaemisr commented 5 months ago

Same here! Couldn't find any documentation on how this should be handled.

gediminastub commented 3 days ago

Same here, still can't find how to do that, would highly appreciate guidance!