Agreed, `onMessage` would be awesome!
You can now access intermediate steps with the `steps` property in `onFinish`. Keeping this open since I want to implement `onStepFinish` as well.
Wow, awesome. Will use `steps` in the meantime.
@lgrammel that's awesome, but which version? I'm not seeing it on v3.3. I'm logging the event passed into the `onFinish(event)` callback and not getting `steps` anywhere in the payload that I can see. Sorry if I'm missing something obvious!
```ts
onFinish(event) {
  console.log('TT - chat api - onFinish event: ', event);
  // ...
}
```
I get an object with `finishReason`, `usage`, `text`, `toolCalls` (empty), `toolResults` (empty), `rawResponse`, `response`, `warnings`, and `experimental_providerMetadata`. Thanks for your help!
I did a little digging and I do see references to `steps` for `useChat` and for `generateText`, but not for `streamText`. I'm just really hoping that I don't have to tap into the stream of chunks to access the intermediate messages on the server, but that's all I can see right now.
It would be absolutely killer to have access to the full array of Messages on the server side, with callbacks/promises for new Messages. While `onFinish()` is very helpful, even setting aside the intermediate-steps challenge, it's still odd to me that I can't just get the actual Message object on the server. I have to reconstruct my own Message to persist it in the same format that's used throughout the rest of the stack. You can sort of see what I mean in this documented example:
```ts
const result = await streamText({
  model: openai('gpt-4-turbo'),
  messages: convertToCoreMessages(messages),
  async onFinish({ text, toolCalls, toolResults, usage, finishReason }) {
    // implement your own storage logic:
    await saveChat({ text, toolCalls, toolResults });
  },
});
```
In this example, assuming I want the conversation stored in the same format it's used in, my custom `saveChat()` function would need to construct a Message from the `text`, `toolCalls`, and `toolResults`. Or I can save it however I want, but then I have to convert it back to a Message before using it to restore the conversation state.
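For illustration, this is roughly the reconstruction I mean (a sketch; `Message` and `generateId` are from the SDK, while the shape of `saveChat` and the persistence call are stand-ins of my own):

```ts
import { generateId, type Message } from 'ai';

// Hypothetical: rebuilding an AI SDK UI Message from the onFinish
// payload before persisting it.
async function saveChat({
  text,
  toolCalls,
  toolResults,
}: {
  text: string;
  toolCalls: unknown[];
  toolResults: unknown[];
}) {
  const message: Message = {
    id: generateId(), // the id used on the stream is lost at this point
    role: 'assistant',
    content: text,
    // toolInvocations would have to be rebuilt by zipping toolCalls
    // with toolResults (see the mapping later in this thread)
  };
  // stand-in for your persistence layer:
  // await db.chats.insert(message);
  return message;
}
```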
I really want to just persist the conversation state without doing any conversions along the way. That would be so incredibly useful to me, and I imagine to others as well. LangChain offers something like this but they go too far and want me to use their adapters straight to the DB/persistence layer. I think vercel/ai has a chance to really set the bar here 🙏
@lgrammel I'm on `ai@3.3.37` and not seeing `steps` in the object passed as a parameter to `onFinish()` for `streamText()`. I updated just now to be sure. I get the same behavior reported above.
I just double-checked, `steps` is available. Can you use this example as a template? https://github.com/vercel/ai/blob/main/examples/ai-core/src/stream-text/openai-on-finish-steps.ts
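A minimal sketch along the lines of that example (model id and prompt are placeholders; tools and multi-step options are elided, so check the linked file for the exact option names in your version):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const result = await streamText({
  model: openai('gpt-4-turbo'),
  // tools and multi-step options elided; see the linked example
  prompt: 'What is the weather in San Francisco?',
  onFinish({ steps }) {
    // each step carries its own text, toolCalls, toolResults, and usage
    for (const step of steps) {
      console.log(step.text, step.toolCalls, step.toolResults);
    }
  },
});

// the stream has to be consumed for onFinish to fire
for await (const _ of result.textStream) {
}
```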
Apologies @lgrammel, you are right. I resolved my issue and am able to access the `steps` property. It's very useful and appreciated, but unfortunately, working with it for my purposes, it's still rather challenging to convert the steps back into either an array of `Message`s for `useChat` or `CoreMessage`s for `streamText`. Probably a skill issue, but I really struggled and resorted to a front-end `onFinish` callback on `useChat`, with the messages available in a reactive variable in SvelteKit.
Messages seem like the best option as a payload for persistence to me, since they're what the front end expects and what's returned through the stream. Also, with an array of `Message`s that includes the `toolInvocations`, I can use `convertToCoreMessages()` to get `CoreMessage`s as needed. It's proven the most useful format for me.
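i.e. roughly this (a sketch; where `messages` comes from is up to you, here it's the UI `Message[]` from `useChat`):

```ts
import { convertToCoreMessages, type Message } from 'ai';

declare const messages: Message[]; // e.g. from useChat on the client

// UI Messages (including toolInvocations) -> CoreMessages for streamText
const coreMessages = convertToCoreMessages(messages);
```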
The feature wouldn't need to be another callback. Any way to get `Message`s in `onFinish` or `onStepFinish`, as you mentioned, would work for me. But the type is an important part of this request to me. I really appreciate you taking the time to look at and consider this, and for the suggestions and the addition of `steps`. Thank you.
`onStepFinish` was added in `ai@3.3.38`.
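A rough usage sketch, assuming the step result has the same shape as the entries in `steps` (model id and prompt are placeholders):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const result = await streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Check the weather, then summarize it.',
  onStepFinish({ text, toolCalls, toolResults, finishReason, usage }) {
    // fires as each step completes, so intermediate tool-call steps
    // can be persisted without waiting for the final answer
    console.log({ text, toolCalls, toolResults, finishReason, usage });
  },
});
```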
@jstjoe I'll probably add a way to get the additional `CoreMessage` objects easily in `onFinish`. Getting to `Message` is neither possible (because of missing information) nor desired (because we would bleed AI SDK UI into AI SDK Core).
This would be great! I would use this as well.
@jstjoe This is how I currently map the `toolResults` inside of the `initialMessages`, if this helps:
```ts
toolInvocations: message.toolResults?.map((toolResult) => ({
  state: 'result',
  args: toolResult.args,
  toolCallId: toolResult.toolCallId,
  toolName: toolResult.toolName,
  result: toolResult.result,
})),
```
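In context, that mapping sits inside `initialMessages` roughly like this (a sketch; `storedMessages` and its fields come from my own DB schema):

```ts
import { useChat } from '@ai-sdk/svelte'; // or '@ai-sdk/react', etc.

// storedMessages: rows loaded from your DB (hypothetical shape)
const { messages } = useChat({
  initialMessages: storedMessages.map((message) => ({
    id: message.id,
    role: message.role,
    content: message.content ?? '',
    toolInvocations: message.toolResults?.map((toolResult) => ({
      state: 'result' as const,
      args: toolResult.args,
      toolCallId: toolResult.toolCallId,
      toolName: toolResult.toolName,
      result: toolResult.result,
    })),
  })),
});
```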
`ai@3.3.40` has `responseMessages` for `streamText`: https://sdk.vercel.ai/docs/ai-sdk-core/tools-and-tool-calling#response-messages
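Based on those docs, server-side persistence then collapses to something like this (a sketch; `saveChat` and the incoming `messages` are your own):

```ts
import { openai } from '@ai-sdk/openai';
import { convertToCoreMessages, streamText } from 'ai';

const coreMessages = convertToCoreMessages(messages);

const result = await streamText({
  model: openai('gpt-4-turbo'),
  messages: coreMessages,
  async onFinish({ responseMessages }) {
    // responseMessages are CoreMessages (assistant + tool), so the whole
    // conversation can be stored in one consistent format
    await saveChat([...coreMessages, ...responseMessages]);
  },
});
```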
@lgrammel Just switched to `ai@3.3.40` and love this addition! Is there a way to see the tool calls while they're happening? I'd like to update the client, because some of my tool calls take a while, so right now it's a bit of a wtf experience for the user. I'm happy with a 10ms `setInterval` hammering an internal array if that's the fastest way to get this done ;)
**edit:** Just found `onChunk`, problem solved 🥳
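For anyone else landing here, the `onChunk` route looks roughly like this (a sketch; the prompt is a placeholder and the tool definitions are elided, chunk types per the `streamText` docs):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const result = await streamText({
  model: openai('gpt-4-turbo'),
  // tools: { ... }, // your tool definitions
  prompt: 'Look up the weather, then answer.',
  onChunk({ chunk }) {
    if (chunk.type === 'tool-call') {
      // tell the client that a (possibly slow) tool call has started
      console.log('tool call:', chunk.toolName, chunk.args);
    }
    if (chunk.type === 'tool-result') {
      console.log('tool result:', chunk.toolName, chunk.result);
    }
  },
});
```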
@mjp0 please also check out tool call streaming: https://sdk.vercel.ai/docs/ai-sdk-ui/chatbot-with-tool-calling#tool-call-streaming
Feature Description
I've searched the documentation and issues and can't find a solution to my challenge. I'm using the Vercel AI SDK extensively and find it incredibly useful, but have been struggling to persist my chats server-side with tool calls.
Is there a way I can use promises or some other callback on my API route to persist every one of the messages that I'm returning up to the client? If not, that's my feature request: something like `onMessage`, which is invoked each time a `message` is returned to the client.
Use Case
I've found that `onFinish` only supplies the last message when using the built-in `tools` option with `streamText()`. I'm following a pattern much like the documented example of persisting chat history: https://sdk.vercel.ai/docs/ai-sdk-ui/storing-messages#implementing-persistent-chat-history
But in practice this means that the LLM's first response, with the tool calls, is not being saved. For the user, this means that when they first have the conversation they'll see the intermediate tool-call messages (which sometimes include a text message from the agent), but if they load the saved conversation, those messages are missing.
Additional context
The only workaround I can think of is to move the logic entirely up to the client and introduce an extra client->server call to save each message from my client-side code. That feels incredibly inefficient and clunky :/