Closed · logan-anderson closed this issue 6 months ago
@logan-anderson How are you? I haven't tested it yet, but judging from the example in the documentation, data.append runs outside the stream, correct? Setting that aside doesn't solve your problem, but it helps to better understand the flow.
I was also looking into making this work but I think it doesn't work that way yet. See this issue, starting with this comment: https://github.com/vercel/ai/pull/425#issuecomment-1682841115
I think it is because the data is streamed together with the LLM response. But I would also love to see if it is possible to stream the data first, before the LLM. Maybe you could look at how streamData works under the hood; see my open issue: https://github.com/vercel/ai/issues/751
@logan-anderson I concur 100% that I expected it (and need it) to be real time. It seems that with append the provided value is pushed into an internal buffer (this.data). However, this action alone doesn't cause the data to be immediately processed or sent through the stream.
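For illustration, here is a minimal sketch (my guess at the behavior, not the SDK's actual source) of why appending into an array-backed buffer delays delivery until something later drains it:

```ts
// Illustrative only -- a guess at the buffering behavior, not the real
// implementation of experimental_StreamData.
class BufferedData {
  private data: unknown[] = [];

  append(value: unknown) {
    // Pushing into the buffer touches no stream: nothing is enqueued,
    // so nothing reaches the client yet.
    this.data.push(value);
  }

  // Only when something later drains the buffer (e.g. when the LLM
  // stream emits its first chunk) do the values go over the wire.
  drainInto(controller: ReadableStreamDefaultController<unknown>) {
    for (const value of this.data) controller.enqueue(value);
    this.data = [];
  }
}
```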
The workaround for getting real-time messages is to not use the stream data at all: use a PubSub service instead, have the client subscribe to the chat ID, and have the chat API handler publish messages to that chat ID (a sketch follows).
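A rough sketch of that workaround, assuming Redis via ioredis for the transport; the channel naming and the reportProgress helper are made up for illustration:

```ts
import Redis from 'ioredis';

const pub = new Redis(process.env.REDIS_URL!);

// Called from the chat API handler while tools run, instead of
// relying on StreamData.append.
async function reportProgress(chatId: string, status: string) {
  await pub.publish(`chat:${chatId}`, JSON.stringify({ status }));
}

// Subscriber side (e.g. behind a WebSocket/SSE bridge the browser
// connects to): forward every message for this chat ID to the client.
const sub = new Redis(process.env.REDIS_URL!);
await sub.subscribe('chat:abc123');
sub.on('message', (_channel, message) => {
  // Push `message` to the client immediately -- no LLM stream involved.
  console.log('progress update:', message);
});
```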
@IdoPesok Did you manage to get experimental_StreamData working, or did you go with some other PubSub service?
In a time of agents and agent tools, this is absolutely crucial.
Not being able to keep the user informed about what the agent is doing during the 10-15 seconds it might spend invoking different tools almost renders the data stream useless.
I know this feature is experimental, but we really cannot see an LLM future without some sort of data stream, and it needs to work as soon as the background operations start.
A big upvote from us.
Fixed in 3.1.11: https://github.com/vercel/ai/releases/tag/ai%403.1.11
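For anyone landing here later, a minimal sketch of the documented pattern around that release (exact imports and APIs vary across ai@3.x versions, so treat this as a sketch rather than the canonical example):

```ts
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse, StreamData } from 'ai';

const openai = new OpenAI();

export async function POST(req: Request) {
  const { messages } = await req.json();
  const data = new StreamData();

  // With the fix, this should reach the client right away,
  // before the model produces its first token.
  data.append({ status: 'searching hacker news' });

  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  const stream = OpenAIStream(response, {
    onFinal() {
      data.close(); // close the data stream once the LLM finishes
    },
  });

  return new StreamingTextResponse(stream, {}, data);
}
```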
Description
I was using the HN chat example and wanted to add the experimental_StreamData feature, so I followed the docs.

The issue
I want to stream info to the frontend about what the backend is doing (i.e. searching Hacker News), but the data is not streamed until the LLM starts responding. I would expect that when I call data.append the data is streamed right away.

How to reproduce
data does not get streamed to the frontend until the LLM starts responding. I would expect that data.append gets streamed when it is called.

See the video demo for more info: https://www.loom.com/share/c98313137f174638a1d1decd400778c0?sid=a5479f18-2739-4440-b0ef-25303cb5bfc9
Code example
GitHub repo: https://github.com/logan-anderson/experimental_StreamData-vercel-ai-issue
Relevant code block.
Additional context
No response