Description
There are 2 issues with `useObject` when using it in conjunction with `streamObject` on the Next.js API layer:

1. If the LLM (e.g. GPT-4o) throws an error such as "This model's maximum context length is 128000 tokens. However, your messages resulted in 154265 tokens (154107 in the messages, 158 in the functions). Please reduce the length of the messages or functions.", `streamObject` throws this error on the backend, but on the front-end we just receive an empty error object. In the front-end example below, `error` becomes an empty object that doesn't carry this message, so I don't know what the BE error is.
2. If I call `stop()` during an error state with the code below, nothing happens.

Basically, in error states this functionality is difficult to use, and there are no examples of dealing with this.
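For reference, a minimal front-end sketch of where both problems show up. The hook and schema shapes follow the linked docs example; the `/api/notifications` endpoint, the `notificationSchema` import path, and the prompt text are illustrative assumptions, not the exact repro code.

```tsx
// pages/index.tsx — minimal repro sketch (ai 3.3.x; names assumed from the
// linked docs example, the schema import path is hypothetical)
import { experimental_useObject as useObject } from 'ai/react';
import { notificationSchema } from '@/app/api/notifications/schema';

export default function Page() {
  const { object, submit, isLoading, error, stop } = useObject({
    api: '/api/notifications',
    schema: notificationSchema,
  });

  if (error) {
    // Issue 1: `error` arrives as an empty object; the backend message
    // ("This model's maximum context length is 128000 tokens...") is lost.
    console.log(JSON.stringify(error)); // "{}"
  }

  return (
    <div>
      <button onClick={() => submit('Messages during a dinner party.')}>
        Generate notifications
      </button>
      {/* Issue 2: calling stop() while in the error state does nothing. */}
      <button onClick={() => stop()} disabled={!isLoading}>
        Stop
      </button>
      {object?.notifications?.map((notification, index) => (
        <div key={index}>
          <p>{notification?.name}</p>
          <p>{notification?.message}</p>
        </div>
      ))}
    </div>
  );
}
```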
Code example
ai npm version 3.3.9. You can use the API example from the docs and trigger an LLM error, e.g. by going over the max context tokens: https://sdk.vercel.ai/examples/next-pages/basics/streaming-object-generation. A sketch of that route is below.
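A sketch of the backend route along the lines of that docs example (the file path, schema fields, model choice, and prompt are assumptions for illustration). Any error `streamObject` throws here, such as the quoted token-limit error, is what arrives on the client as an empty `error` object.

```ts
// app/api/notifications/route.ts — sketch following the linked docs example
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { z } from 'zod';

const notificationSchema = z.object({
  notifications: z.array(
    z.object({
      name: z.string(),
      message: z.string(),
    }),
  ),
});

export async function POST(req: Request) {
  const context = await req.json();

  // If the prompt exceeds the model's context window, streamObject throws
  // the "maximum context length is 128000 tokens" error at this point.
  const result = await streamObject({
    model: openai('gpt-4o'),
    schema: notificationSchema,
    prompt: `Generate 3 notifications for a messages app in this context: ${context}`,
  });

  // The thrown error is not surfaced to the client; useObject only sees {}.
  return result.toTextStreamResponse();
}
```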
Additional context
No response