vercel / ai

Build AI-powered applications with React, Svelte, Vue, and Solid
https://sdk.vercel.ai/docs

AI State not updated during iterations #1532

Open fran-cadenas-fu opened 4 months ago

fran-cadenas-fu commented 4 months ago

Description

Hello,

I'm currently trying to use the new streamUI feature to build a chat that keeps asking questions on a given topic and displays quick-reply buttons to the user through tools.

I can't get it to work because the AIState keeps getting reset to its value from the first iteration. I'm not sure whether this is a problem with function memoization or something else.

Code example

// Imports added for completeness (AI SDK 3.x RSC APIs):
import { streamUI, getMutableAIState } from "ai/rsc";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

async function submitUserMessage(userInput: string): Promise<UIMessage> {
  "use server";

  console.log("submitUserMessage", userInput);
  const history = getMutableAIState<typeof AI>();

  const ui = await streamUI({
    model: openai("gpt-4-turbo"),
    system: systemPrompt,
    messages: history.get().messages as CoreMessage[],
    text: async ({ content, done }) => {
      // When it's the final content, mark the state as done and ready for the client to access.
      if (done) {
        history.done({
          ...history.get(),
          messages: [
            ...history.get().messages,
            {
              role: "assistant",
              content,
            },
          ],
        });
      }

      return <ChatAssistantMessage content={content} />;
    },
    tools: {
      show_quick_reply_question: {
        description: "Shows the ui for the user to quick reply",
        parameters: z
          .object({
            question: z.string().describe("the question to show"),
            options: z.array(z.string()).describe("the options to show"),
          })
          .required(),
        generate: async function ({ question, options }) {
          console.log(
            "show_quick_reply_question messages",
            history.get().messages
          );

          history.done({
            ...history.get(),
            messages: [
              ...history.get().messages,
              {
                role: "function",
                name: "show_quick_reply_question",
                content: JSON.stringify({ question, options }),
              },
            ],
          });

          return [
            <ChatAssistantMessage content={question} className="mt-2" />,
            <div className="flex flex-col gap-2 mt-2">
              {options.map((option) => (
                <ChatQuickReply key={option} chatQuickReply={option} />
              ))}
            </div>,
          ];
        },
      },
    },
  });

  // The snippet as posted never returned anything despite the declared
  // return type; streamUI resolves with the streamed UI in `value`.
  return ui.value;
}

Additional context

When the conversation goes through the "text" streaming path, the state seems to update successfully. But when the questions are rendered through the 'show_quick_reply_question' tool (which the assistant uses because the context tells it to), pressing one of the option buttons in the UI submits a new message, and in that iteration the history that is accessed is stale: it looks like the state from the first iteration, with only the initial message and the last UI message.

unstubbable commented 4 months ago

The current implementation of streamUI has a known limitation that tool calls overwrite each other and also the text UI (see https://github.com/vercel/ai/pull/1210 for an approach to fix this in the predecessor render). Therefore, you currently need to handle the composition of text and tool UI & state manually, and limit it to a single tool call per message via the system prompt.
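The single-tool-call limitation can be enforced directly in the system prompt. A minimal sketch of such an instruction (the wording is purely illustrative, not anything prescribed by the AI SDK):

```typescript
// Hypothetical system prompt enforcing the workaround described above:
// at most one tool call per assistant turn. Wording is illustrative only.
const systemPrompt: string = [
  "You are an assistant that interviews the user on a given topic.",
  "When you want the user to pick from predefined answers, call the",
  "show_quick_reply_question tool.",
  "IMPORTANT: Use at most ONE tool call per response, and never combine",
  "a tool call with free-form text in the same response.",
].join("\n");
```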

Furthermore, make sure that you properly set the assistant content in generate, similar to https://sdk.vercel.ai/docs/ai-sdk-rsc/ai-ui-states#updating-ai-state-on-server, and also the tool content.
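Concretely, the update inside `generate` should append both an assistant message recording the tool call and the tool result itself, not just the latter, so the next iteration sees the full exchange. A minimal sketch of that update as a pure function (the message shapes mirror the issue's snippet, including its `"function"` role; `appendToolInteraction` is a hypothetical helper, not an SDK API):

```typescript
// Message shapes mirroring the issue's snippet; the "function" role matches
// what the original code stores for tool results.
type Message =
  | { role: "user" | "assistant"; content: string }
  | { role: "function"; name: string; content: string };

interface AIState {
  messages: Message[];
}

// Hypothetical helper: returns the next AI state with BOTH the assistant's
// tool-call record and the tool result appended in one immutable update,
// mirroring `history.done({ ...history.get(), messages: [...] })`.
function appendToolInteraction(
  state: AIState,
  toolName: string,
  args: unknown
): AIState {
  return {
    ...state,
    messages: [
      ...state.messages,
      // Record that the assistant invoked the tool...
      { role: "assistant", content: `Called tool: ${toolName}` },
      // ...and the tool result itself.
      { role: "function", name: toolName, content: JSON.stringify(args) },
    ],
  };
}
```

Inside `generate`, this would be used as `history.done(appendToolInteraction(history.get(), "show_quick_reply_question", { question, options }))`.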

Here's a fully working example: https://github.com/unstubbable/mfng-ai-demo/blob/db32ff4c80fc831d62bf07c7603b02e7b9e09891/src/app/submit-user-message.tsx#L86-L119

Disclaimer: This is not a general recommendation of how streamUI is supposed to be used, I'm just sharing the solution that worked for me with the current state of the AI SDK.

gclark-eightfold commented 3 months ago

I'm curious if you know what causes this limitation.

I've been playing around with streamUI but have recently been looking at switching to streamText to see whether that fixes this issue. (The onFinish callback would also be nice to have with streamText.)

I suspect it has more to do with the aiState and uiState though, so I'm not sure switching to streamText will fix any of these issues.