vercel / ai

Build AI-powered applications with React, Svelte, Vue, and Solid
https://sdk.vercel.ai/docs

Adding an `onChunk` callback (or similar?) for `streamObject` #2887

Open holdenmatt opened 2 weeks ago

holdenmatt commented 2 weeks ago

Feature Description

I'm using `streamObject` in a route handler, similar to the docs example here:

```ts
import { streamObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Zod schema for the generated notifications (as in the docs example).
const notificationSchema = z.object({
  notifications: z.array(z.object({ name: z.string(), message: z.string() })),
});

export async function POST(req: Request) {
  const context = await req.json();

  const result = await streamObject({
    model: openai('gpt-4-turbo'),
    schema: notificationSchema,
    prompt:
      `Generate 3 notifications for a messages app in this context:` + context,
  });

  return result.toTextStreamResponse();
}
```

Is there a method (or workaround) for getting a callback when the first chunk is sent?

I see that `streamText` has an `onChunk` callback, but `streamObject` doesn't. Is that intentional?
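
One workaround I've been considering (a sketch, not an official API — the timing variable and the logging call are placeholders): pipe the response body through a `TransformStream` and record when the first chunk passes through:

```ts
export async function POST(req: Request) {
  const context = await req.json();
  const startTime = Date.now();

  const result = await streamObject({
    model: openai('gpt-4-turbo'),
    schema: notificationSchema,
    prompt:
      `Generate 3 notifications for a messages app in this context:` + context,
  });

  // Tap the byte stream so we can observe the first chunk without altering it.
  let firstChunkSeen = false;
  const tap = new TransformStream<Uint8Array, Uint8Array>({
    transform(chunk, controller) {
      if (!firstChunkSeen) {
        firstChunkSeen = true;
        // Replace with your own logging/analytics call.
        console.log('time to first chunk (ms):', Date.now() - startTime);
      }
      controller.enqueue(chunk);
    },
  });

  const response = result.toTextStreamResponse();
  return new Response(response.body?.pipeThrough(tap), {
    headers: response.headers,
    status: response.status,
  });
}
```

This measures time-to-first-byte at the route-handler level, which would also cover the latency-logging use case below.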

Use Case

I'm trying to log LLM latency in a Next.js app and want to measure "time-to-first-byte". I haven't found a good way to detect when the first chunk is sent.

Additional context

No response

lgrammel commented 2 weeks ago

An `onChunk` callback would be misleading, since we are not streaming text. I'll look into other ways to get you this information; for `streamText`, for example, it's automatically part of our OTel information.
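
For reference, the `streamText` OTel hook referred to above is the `experimental_telemetry` option. A minimal sketch (exact span attributes vary by SDK version, and `'notifications-route'` is a placeholder label):

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await streamText({
  model: openai('gpt-4-turbo'),
  prompt: 'Hello',
  // Opt in to the AI SDK's OpenTelemetry instrumentation; the emitted
  // spans carry timing information for the streamed call.
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'notifications-route', // optional label attached to the spans
  },
});
```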

holdenmatt commented 2 weeks ago

I see. Sure, any other approach that triggers on ~first byte would also work for me, e.g. `AIStream` has an `onStart` handler. I'm using PostHog (not OTel), but if there's some way to plug into the OTel integration just to get a 'start' event, that would also work fine. Thanks!
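
The rough shape I have in mind for that bridge (an untested sketch, not an official integration — `PostHogSpanProcessor`, the event name, and the API key are my own placeholders): a custom OpenTelemetry `SpanProcessor` that forwards span starts to PostHog.

```ts
import { PostHog } from 'posthog-node';
import type { Span, SpanProcessor } from '@opentelemetry/sdk-trace-base';

const posthog = new PostHog('phc_project_api_key'); // placeholder key

// Forwards every span start to PostHog, approximating a 'start' event for
// instrumented SDK calls. Register it on your OTel tracer provider.
class PostHogSpanProcessor implements SpanProcessor {
  onStart(span: Span): void {
    posthog.capture({
      distinctId: 'server',
      event: 'llm_span_start',
      properties: { spanName: span.name },
    });
  }
  onEnd(): void {}
  forceFlush(): Promise<void> {
    return posthog.flush();
  }
  shutdown(): Promise<void> {
    return posthog.shutdown();
  }
}
```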