justinmahar / openai-ext

🤖 Extension to OpenAI's API to support streaming chat completions.
https://justinmahar.github.io/openai-ext/
MIT License
49 stars · 3 forks

Streams on localhost, but not on Netlify? #4

Open dosstx opened 1 year ago

dosstx commented 1 year ago

Nice package. I am using this in a Nuxt app. It works well, except... for some reason there is a major difference between running it on localhost and on a production server:

Difference:

Running it locally, the data is streamed and the text messages appear incrementally in the UI. However, running the same code on a production server (Netlify), the data appears all at once instead of incrementally.

Is it the way I am hosting the app on Netlify? Perhaps I need to deploy to Netlify Edge rather than standard Netlify functions? https://nitro.unjs.io/deploy/providers/netlify#netlify
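If standard Netlify functions buffer the whole response before returning it, forcing the Netlify Edge preset in Nitro may be worth trying. A minimal sketch, assuming the `netlify_edge` preset name from the Nitro deploy docs (verify against your Nitro version):

```typescript
// nuxt.config.ts — hedged sketch: deploy server routes as Netlify Edge
// Functions, which support streamed responses, instead of standard
// Netlify functions, which may buffer the body.
export default defineNuxtConfig({
  nitro: {
    preset: "netlify_edge",
  },
});
```

The same can reportedly be done without a config change by setting `NITRO_PRESET=netlify_edge` in the build environment.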

dosstx commented 1 year ago

It looks like it does indeed need edge functions to work, but I can't get Netlify Edge functions to work right now. I'll try on Vercel.

dosstx commented 1 year ago

OK, so I tried Vercel, and although it deploys to Vercel Edge (uses edge functions), I get the following server error:

{
    "message": "Cannot read properties of undefined (reading 'pipe')",
    "stack": "",
    "statusCode": 500,
    "statusMessage": "",
    "url": "/api/chat"
}

It works FINE on localhost, just not when deployed to Vercel (or Netlify). Here's my code:

import { Configuration, OpenAIApi } from 'openai'
import { OpenAIExt } from 'openai-ext'

const config = useRuntimeConfig()

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY || config.OPENAI_API_KEY
})
const openai = new OpenAIApi(configuration)

const systemPrompts = [...]

// Configure the stream (use type ServerStreamChatCompletionConfig for TypeScript users)
const streamConfig = {
  openai: openai,
  handler: {
    // Content contains the string draft, which may be partial. When isFinal is true, the completion is done.
    onContent(content, isFinal, stream) {
      console.log(content, "isFinal?", isFinal);
    },
    onDone(stream) {
      console.log('Done!');
    },
    onError(error, stream) {
      console.error(error);
    },
  },
};

const axiosConfig = {
  timeout: 15000
};

export const getChatStream = async ({ messages }) => {
  try {
    const response = await OpenAIExt.streamServerChatCompletion(
      {
        model: 'gpt-3.5-turbo',
        messages: [
          ...systemPrompts, ...messages
        ],
      },
      streamConfig,
      axiosConfig
    );

    return response.data;
  } catch (error) {
    console.error(1, error);
    //TODO: Display error message on UI
    // For example, you can use a library like Toastify to display a toast message
    // toast.error('Request failed. Please try again later.');
  }

};

If anyone has any ideas, let me know. Until then, I won't be able to stream on production servers.
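One plausible cause of the `'pipe'` error (a guess, since edge runtimes don't ship Node's full `stream` module): the axios response body is a Node.js `Readable`, which gets piped server-side, while edge functions expect Web `ReadableStream`s. A minimal, self-contained sketch of the conversion using only Node built-ins, demonstrated with a dummy stream in place of the real response:

```javascript
import { Readable } from "node:stream";

// Stand-in for the Node.js Readable that axios hands back (hypothetical data).
const nodeStream = Readable.from([Buffer.from("Hello, "), Buffer.from("world")]);

// Convert to a Web ReadableStream, the type edge runtimes can return directly.
const webStream = Readable.toWeb(nodeStream);

// Drain the Web stream to check that the bytes survive the conversion.
let out = "";
for await (const chunk of webStream) {
  out += Buffer.from(chunk).toString("utf8");
}
console.log(out); // "Hello, world"
```

In a real route handler you would return `webStream` (or hand it to the framework's stream helper) rather than draining it, but whether openai-ext's internals accept this is untested here.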

brianfoody commented 1 year ago

I am having a strange problem with this package too: it works great on localhost, but when deployed to Vercel, onContent is only called the next time the function is invoked. I'm baffled right now.

brianfoody commented 1 year ago

I switched to this @dosstx and it solved my problem - https://github.com/SpellcraftAI/openai-streams

import { OpenAI } from "openai-streams/node";

// ...

const stream = await OpenAI("chat", {
  model: "gpt-3.5-turbo",
  messages: [
    systemMessage(),
    userMessage,
  ],
  temperature: props.bot.temperature,
  top_p: props.bot.top_p,
  frequency_penalty: props.bot.frequency_penalty,
  presence_penalty: props.bot.presence_penalty,
  max_tokens: 1000,
  n: 1,
});

let response = "";
for await (const chunk of stream) {
  // convert chunk to a string if necessary
  const content = Buffer.from(chunk).toString("utf8");

  response += content;

  // You can do something with it here
}
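The accumulation loop above works against any async iterable of byte chunks, so it can be exercised without hitting the API at all. A hedged sketch with a hypothetical `fakeStream` generator standing in for the openai-streams response:

```javascript
// Hypothetical stand-in for the openai-streams byte stream: an async
// generator yielding Buffers, just like the real stream's chunks.
async function* fakeStream() {
  yield Buffer.from("He");
  yield Buffer.from("llo");
}

let response = "";
for await (const chunk of fakeStream()) {
  // Decode each byte chunk and append it to the running completion text.
  response += Buffer.from(chunk).toString("utf8");
}
console.log(response); // "Hello"
```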

dosstx commented 1 year ago

Thank you, I haven't tried that yet, but I did fix my original pipe problem. Now, for some reason, I get another error that I am trying to track down:

{
    "url": "/api/chat",
    "statusCode": 404,
    "statusMessage": "Cannot find any route matching /api/chat.",
    "message": "Cannot find any route matching /api/chat.",
    "stack": ""
}

Again, the code works locally, but not when deployed.
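For the 404 above, one common cause (a guess, since the repo layout isn't shown) is the route file sitting outside Nuxt's `server/` directory, or a method-suffixed filename not matching the request method. Nuxt 3 maps server route files to URLs by path, roughly:

```
server/
  api/
    chat.post.ts    → POST /api/chat
    chat.get.ts     → GET  /api/chat
    chat.ts         → any method, /api/chat
```

If the handler lives elsewhere (e.g. in a plain `api/` folder at the project root), it may be picked up by the dev server plugin chain locally but never registered in the production build.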