vercel / ai

Build AI-powered applications with React, Svelte, Vue, and Solid
https://sdk.vercel.ai/docs

Timeout error while using `Stream Text generation` from ai-sdk (Free plan) #1636

Closed reveurguy closed 3 weeks ago

reveurguy commented 3 months ago

Description

While using Stream Text generation from the AI SDK, the function call times out. The generation starts and runs for a bit; after that, I receive this error and the generation stops.

[Screenshot of the timeout error, 2024-05-17]

This is the log from Vercel:

[Vercel function log screenshot]

This is the error console log in production:

Code example

This is the code in /layout.tsx file:

export const dynamic = 'force-dynamic';
export const maxDuration = 60;

This is the code in /page.tsx file:

  async function output() {
    const { output } = await generate(prompt, gpt3Configurations);
    const startTime = Date.now();
    let endTime = 0;
    for await (const delta of readStreamableValue(output)) {
      setGpt3Output((currentGeneration) => `${currentGeneration}${delta}`);
      endTime = Date.now();
    }
    const time = endTime - startTime;
    setGpt3Time(time);
  }

  async function output4() {
    const { output } = await generate4(prompt, gpt4Configurations);
    const startTime = Date.now();
    let endTime = 0;
    for await (const delta of readStreamableValue(output)) {
      setGpt4Output((currentGeneration) => `${currentGeneration}${delta}`);
      endTime = Date.now();
    }
    const time = endTime - startTime;
    setGpt4Time(time);
  }

  async function output4o() {
    const { output } = await generate4o(prompt, gpt4oConfigurations);
    const startTime = Date.now();
    let endTime = 0;
    for await (const delta of readStreamableValue(output)) {
      setGpt4oOutput((currentGeneration) => `${currentGeneration}${delta}`);
      endTime = Date.now();
    }
    const time = endTime - startTime;
    setGpt4oTime(time);
  }

  const handleRun = (e: React.MouseEvent) => {
    if (prompt) {
      e.preventDefault();
      // output(), output4(), and output4o() already return promises,
      // so they can be passed to Promise.all directly.
      Promise.all([output(), output4(), output4o()]).catch((error) => {
        console.error('An error occurred while running the functions:', error);
        toast.error('An error occurred while running the functions');
      });
    }
  };
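As an aside, the three output functions above differ only in which action they call and which state setters they update. A hedged sketch of a shared helper (the name `consumeStream` is hypothetical, not from the issue) that consumes any text-delta stream, forwards each chunk, and returns the elapsed time:

```typescript
// Hypothetical helper (sketch): consume a stream of text deltas, forward each
// chunk to a callback, and return the elapsed milliseconds until the last delta.
export async function consumeStream(
  stream: AsyncIterable<string>,
  onDelta: (chunk: string) => void,
): Promise<number> {
  const start = Date.now();
  let end = start;
  for await (const delta of stream) {
    onDelta(delta); // e.g. setGpt3Output((cur) => `${cur}${delta}`)
    end = Date.now();
  }
  return end - start;
}
```

Each output function would then reduce to a couple of lines, since `readStreamableValue(output)` is itself async-iterable and can be passed in directly (assuming the stream yields strings).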

This is the code for the generate, generate4, generate4o functions in the /action.ts file:

'use server';

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { createStreamableValue } from 'ai/rsc';

type Config = {
  maxTokens: number;
  temperature: number;
  topP: number;
  presencePenalty: number;
  frequencyPenalty: number;
};

export async function generate(input: string, config: Config) {
  'use server';
  const stream = createStreamableValue('');
  (async () => {
    const { textStream } = await streamText({
      model: openai('gpt-3.5-turbo'),
      prompt: input,
      maxTokens: config.maxTokens,
      temperature: config.temperature,
      topP: config.topP,
      presencePenalty: config.presencePenalty,
      frequencyPenalty: config.frequencyPenalty,
    });
    for await (const delta of textStream) {
      stream.update(delta);
    }
    stream.done();
  })();

  return { output: stream.value };
}

export async function generate4(input: string, config: Config) {
  'use server';
  const stream = createStreamableValue('');
  (async () => {
    const { textStream } = await streamText({
      model: openai('gpt-4'),
      prompt: input,
      maxTokens: config.maxTokens,
      temperature: config.temperature,
      topP: config.topP,
      presencePenalty: config.presencePenalty,
      frequencyPenalty: config.frequencyPenalty,
    });
    for await (const delta of textStream) {
      stream.update(delta);
    }
    stream.done();
  })();

  return { output: stream.value };
}

export async function generate4o(input: string, config: Config) {
  'use server';
  const stream = createStreamableValue('');
  (async () => {
    const { textStream } = await streamText({
      model: openai('gpt-4o'),
      prompt: input,
      maxTokens: config.maxTokens,
      temperature: config.temperature,
      topP: config.topP,
      presencePenalty: config.presencePenalty,
      frequencyPenalty: config.frequencyPenalty,
    });
    for await (const delta of textStream) {
      stream.update(delta);
    }
    stream.done();
  })();

  return { output: stream.value };
}
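The three actions are identical apart from the model id, so they could be collapsed into one parameterized action. A sketch (the name `generateWith` and the `MODEL_IDS` map are hypothetical; the `streamText` parameters are the same ones used above):

```typescript
'use server';

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { createStreamableValue } from 'ai/rsc';

const MODEL_IDS = {
  gpt3: 'gpt-3.5-turbo',
  gpt4: 'gpt-4',
  gpt4o: 'gpt-4o',
} as const;

type Config = {
  maxTokens: number;
  temperature: number;
  topP: number;
  presencePenalty: number;
  frequencyPenalty: number;
};

export async function generateWith(
  variant: keyof typeof MODEL_IDS,
  input: string,
  config: Config,
) {
  const stream = createStreamableValue('');
  (async () => {
    const { textStream } = await streamText({
      model: openai(MODEL_IDS[variant]),
      prompt: input,
      // Config keys match the streamText option names, so spreading works.
      ...config,
    });
    for await (const delta of textStream) {
      stream.update(delta);
    }
    stream.done();
  })();

  return { output: stream.value };
}
```

This is a restructuring only; the streaming behavior is the same as the original three functions. (It cannot run standalone here, as it needs the Next.js runtime and an OpenAI API key.)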

The handleRun() function is called when the submit button is clicked.

Additional context

No response

admineral commented 3 months ago

Maybe this helps https://vercel.com/guides/streaming-from-llm

ElectricCodeGuy commented 3 months ago

`export const maxDuration = 60;` should be placed in the page.tsx file.

jeremyphilemon commented 3 months ago

@ElectricCodeGuy is right: server actions inherit the maxDuration set in the page they're called from, so I would move it into page.tsx.

I was able to reproduce the error and setting the max duration in page.tsx fixed it!
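Concretely, the fix is moving the route segment config from layout.tsx into the page whose route invokes the actions (file path assumed):

```typescript
// app/page.tsx (assumed path) — the page that calls the server actions.
// Server actions inherit maxDuration from the page that invokes them.
export const dynamic = 'force-dynamic';
export const maxDuration = 60; // seconds
```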

ElectricCodeGuy commented 3 months ago

I created a small example project showing how you could implement the new ai/rsc APIs :) https://github.com/ElectricCodeGuy/SupabaseAuthWithSSR