vercel / ai

Build AI-powered applications with React, Svelte, Vue, and Solid
https://sdk.vercel.ai/docs

Modify the AI Message & properties being sent to the UI via data stream #3357

Open · Godrules500 opened this issue 3 days ago

Godrules500 commented 3 days ago

Feature Description

Within RSC I had full control over the state, so I could modify what was being sent back to the UI. What I'm hoping for, if it doesn't exist already, is the ability to modify the message properties when the results are streamed back.

An example would be error handling. If an error is thrown by the LLM, or if a response hits a content filter and is tagged as unsafe, I need the ability to override the message going back to the user. I also need to set an errorFlag, so that once the message gets back to the client the object has the updated properties and content.

Using things like data or message annotations (which currently aren't working for me) is another point of failure that I have to manage. So I'd like the ability to override what gets sent back to the client in the onFinish callback of streamText.
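To make this concrete, here is a minimal sketch of where I hit the limitation, assuming streamText's existing onFinish callback and the 'content-filter' finish reason; the overrideMessage call is hypothetical and does not exist today:

```ts
import { streamText } from 'ai';

// `model` and `messages` are assumed to be in scope
// (e.g. a google-vertex Gemini model and the chat history).
const result = await streamText({
  model,
  messages,
  onFinish: ({ text, finishReason }) => {
    if (finishReason === 'content-filter') {
      // This is the point where I know the response was filtered,
      // but there is no way to rewrite the message from here.
      // Hypothetically, something like:
      //   overrideMessage({ content: "Sorry, I can't help with that.", errorFlag: true });
      // (overrideMessage does not exist -- it is the ability I'm asking for.)
    }
  },
});
```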

Use Case

Additional context

No response

lgrammel commented 2 days ago

Thank you! Please also see https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot#error-messages regarding overriding / controlling error messages that are sent from the server.
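Roughly, the approach from those docs looks like the following (a sketch using the getErrorMessage option of toDataStreamResponse described on that page; `model` is assumed to be your provider instance, and the linked docs have the current signature):

```ts
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({ model, messages });

  return result.toDataStreamResponse({
    // Replace the default masked error text with a controlled message.
    getErrorMessage: (error) => {
      if (error instanceof Error) return error.message;
      return String(error);
    },
  });
}
```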

Godrules500 commented 2 days ago

> Thank you! Please also see https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot#error-messages regarding overriding / controlling error messages that are sent from the server.

Content filters (meaning safety violations from google-vertex Gemini) don't fall into the error category and don't show up until onFinish. So far I cannot get them to reach the getError callback.

Is there a way that we could, in the onFinish method, get full control over what goes back?

lgrammel commented 2 days ago

Only in an extension fashion; otherwise it would break consumption on the client. This is the role that stream data / message annotations are supposed to play.
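A minimal sketch of that extension-style approach, assuming the SDK's StreamData and appendMessageAnnotation APIs (the errorFlag field is just an illustrative annotation):

```ts
import { streamText, StreamData } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();
  const data = new StreamData();

  const result = await streamText({
    model, // assumed: your provider model instance
    messages,
    onFinish: ({ finishReason }) => {
      // Extend (not replace) the message: the annotation travels in the
      // data stream and is merged into the message on the client.
      data.appendMessageAnnotation({
        errorFlag: finishReason === 'content-filter',
      });
      data.close();
    },
  });

  return result.toDataStreamResponse({ data });
}
```

On the client, the annotation then appears under message.annotations on the finished message.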

Godrules500 commented 2 days ago

> Only in an extension fashion; otherwise it would break consumption on the client. This is the role that stream data / message annotations are supposed to play.

I'll give it a shot; it just adds more points of failure.

With RSC, do you know if there is a timeline or expectation of if/when work may pick back up on it? I'm tempted to stay with RSC, because having full control over the state lets me streamline certain types of interactions.

lgrammel commented 2 days ago

No timeline, since there are a few fundamental things that need sorting out, and meanwhile the LLM space is changing quickly (as always). My hope is 6 to 12 months, but there is high uncertainty.