vercel / ai

Build AI-powered applications with React, Svelte, Vue, and Solid
https://sdk.vercel.ai/docs

`experimental_StreamData` is not streaming data in realtime #853

Closed: logan-anderson closed this 6 months ago

logan-anderson commented 11 months ago

Description

I was using the Hacker News chat example and wanted to add the `experimental_StreamData` feature, so I followed the docs.

The issue

I want to stream information to the frontend about what the backend is doing (i.e., searching Hacker News), but the data is not streamed until the LLM starts responding. I would expect the data to be streamed as soon as I call `data.append`.

How to reproduce

  1. clone and setup the demo repo
  2. run yarn dev
  3. Go to localhost:3000, ask it about Hacker News, and see that the data does not get streamed to the frontend until the LLM starts responding. I would expect `data.append` to stream the data as soon as it is called.

See video demo for more info: https://www.loom.com/share/c98313137f174638a1d1decd400778c0?sid=a5479f18-2739-4440-b0ef-25303cb5bfc9

Code example

Github Repo: https://github.com/logan-anderson/experimental_StreamData-vercel-ai-issue

Relevant code block:

  const data = new experimental_StreamData();
  const stream = OpenAIStream(initialResponse, {
    onFinal: () => {
      data.close();
    },
    experimental_streamData: true,
    experimental_onFunctionCall: async (
      { name, arguments: args },
      createFunctionCallMessages,
    ) => {
      console.log("appending Data");
      // The data should be streamed when `data.append` is called (before `runFunction`)
      data.append({ message: "Searching Hacker News..." });
      const result = await runFunction(name, args);
      const newMessages = createFunctionCallMessages(result);
      data.append({ message: "Done searching Hacker News" });
      // Issue: the data is not streamed until the LLM starts streaming
      return openai.chat.completions.create({
        model: "gpt-3.5-turbo-1106",
        stream: true,
        messages: [...messages, ...newMessages],
      });
    },
  });

Additional context

No response

tgonzales commented 11 months ago

@logan-anderson How are you? I haven't tested it yet, but in the documentation example, `data.append` is called outside the stream callbacks, correct? Moving it out won't solve your problem, but it may help you better understand the flow.

rafalzawadzki commented 11 months ago

I was also looking into making this work but I think it doesn't work that way yet. See this issue, starting with this comment: https://github.com/vercel/ai/pull/425#issuecomment-1682841115

nabilfatih commented 11 months ago

I think it is because the data is streamed together with the LLM response. But I would also love to see if it is possible to stream the data before the LLM responds. Maybe you could look at how `experimental_StreamData` works under the hood; see my open issue https://github.com/vercel/ai/issues/751

IdoPesok commented 11 months ago

@logan-anderson I concur 100%; I also expected (and need) it to be real time. It seems that `append` only pushes the provided value into the internal buffer (`this.data`). That action alone doesn't cause the data to be immediately processed or sent through the stream.

IdoPesok commented 11 months ago

A workaround for getting real-time messages is to not use the stream data at all; instead, use a PubSub service: the client subscribes to the chat ID, and the chat API handler publishes messages to that chat ID.
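Roughly, the workaround looks like this. This is an in-memory stand-in to show the shape of the idea; a real setup would use a hosted PubSub service (e.g. Ably or Pusher), and the class and channel names here are made up:

```typescript
// In-memory stand-in for a PubSub service keyed by chat ID.
type Handler = (message: unknown) => void;

class ChatPubSub {
  private subscribers = new Map<string, Handler[]>();

  // The client subscribes to its chat ID.
  subscribe(chatId: string, handler: Handler): void {
    const handlers = this.subscribers.get(chatId) ?? [];
    handlers.push(handler);
    this.subscribers.set(chatId, handlers);
  }

  // The chat API handler publishes progress updates as they happen,
  // independently of the LLM token stream.
  publish(chatId: string, message: unknown): void {
    for (const handler of this.subscribers.get(chatId) ?? []) {
      handler(message);
    }
  }
}

const pubsub = new ChatPubSub();
const received: unknown[] = [];
pubsub.subscribe("chat-123", (msg) => received.push(msg));

// Inside the API route, before/after running the tool:
pubsub.publish("chat-123", { message: "Searching Hacker News..." });
// ...await runFunction(name, args) would go here...
pubsub.publish("chat-123", { message: "Done searching Hacker News" });

console.log(received.length); // 2: both updates arrive as they are published
```

Because publishing is decoupled from the LLM stream, the progress messages reach the client immediately instead of waiting for the first token.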

mlewandowskim commented 8 months ago

@IdoPesok Have you managed to get `experimental_StreamData` working, or did you go with some other PubSub service?

proemian commented 7 months ago

In a time of agents and agent tools, this is absolutely crucial.

Not being able to keep the user informed about what the agent is doing during the 10-15 seconds it might spend invoking different tools almost renders the data stream useless.

I know this feature is experimental, but we really cannot see an LLM future without some sort of data stream, and it needs to work as soon as the background operations start.

A big upvote from us.

lgrammel commented 6 months ago

Fixed in 3.1.11: https://github.com/vercel/ai/releases/tag/ai%403.1.11
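For context, the fix makes appended values flow to the client as they are appended rather than being held until the LLM responds. A rough model of that flush-on-append behavior, built on the web-standard `ReadableStream` (this is an illustration of the semantics, not the SDK's actual code; `LiveStreamData` is a made-up name):

```typescript
// Flush-on-append model: each append() enqueues directly into a live stream,
// so a consumer reading the stream sees it immediately instead of
// waiting for close().
class LiveStreamData {
  readonly stream: ReadableStream<string>;
  private controller!: ReadableStreamDefaultController<string>;

  constructor() {
    this.stream = new ReadableStream<string>({
      // start() runs synchronously, so the controller is set before use.
      start: (controller) => {
        this.controller = controller;
      },
    });
  }

  append(value: unknown): void {
    // Enqueued right away -- no internal buffer waiting for finalization.
    this.controller.enqueue(JSON.stringify(value));
  }

  close(): void {
    this.controller.close();
  }
}

async function main() {
  const data = new LiveStreamData();
  const reader = data.stream.getReader();

  data.append({ message: "Searching Hacker News..." });
  // Resolves immediately with the appended value, before any LLM tokens.
  const first = await reader.read();
  console.log(first.value);

  data.close();
}

main();
```

With these semantics, the `data.append({ message: "Searching Hacker News..." })` call from the original repro reaches the frontend before `runFunction` even starts.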