vercel / ai

Build AI-powered applications with React, Svelte, Vue, and Solid
https://sdk.vercel.ai/docs

Support for OpenAI Function Calling and streaming? #80

Closed mattlgroff closed 1 year ago

mattlgroff commented 1 year ago

Are there any examples of using this with the new Function Calling feature in OpenAI's Chat Completion API?

Or is there more work needed to support this?

Thanks!

CarlosZiegler commented 1 year ago

Here is my code:

/* eslint-disable turbo/no-undeclared-env-vars */
import { StreamingTextResponse, OpenAIStream } from 'ai';

import { zValidateEnv } from '@/utils';
import { openAISchema, pineconeSchema } from '@/schemas';
import { Configuration, OpenAIApi } from 'openai';

interface FunctionCall {
  name: string;
  arguments: string;
}

interface Message {
  role: string;
  content: string;
  function_call?: FunctionCall;
}

interface Function {
  name: string;
  description: string;
  parameters: object;
}

const functionDescription: Function = {
  name: 'get_current_weather',
  description: 'Get the current weather in a given location',
  parameters: {
    type: 'object',
    properties: {
      location: {
        type: 'string',
        description: 'The city and state, e.g. San Francisco, CA',
      },
      unit: {
        type: 'string',
        enum: ['celsius', 'fahrenheit'],
      },
    },
    required: ['location'],
  },
};

const { OPENAI_API_KEY } = zValidateEnv(openAISchema);

const configuration = new Configuration({
  apiKey: OPENAI_API_KEY,
});

// fake response
function get_current_weather(args: { location: string; unit: string }) {
  const weather_info = {
    location: args.location,
    temperature: '72',
    unit: args?.unit || 'fahrenheit',
    forecast: ['sunny', 'windy'],
  };
  return JSON.stringify(weather_info);
}

export async function POST(req: Request) {
  const { messages } = await req.json();

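  // Note: only the most recent user message is forwarded to the model.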
  const initialMessage = {
    role: 'user' as const,
    content: messages[messages.length - 1].content as string,
  };

  const model = new OpenAIApi(configuration);
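  // First call (non-streaming): let the model decide whether to call the function.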
  const response = await model.createChatCompletion({
    model: 'gpt-4-0613',
    messages: [initialMessage],
    functions: [functionDescription],
    function_call: 'auto',
  });

  const message = response?.data?.choices?.[0]?.message;

  if (message?.function_call && message.function_call.arguments) {
    const functionResponse = get_current_weather(
      JSON.parse(message.function_call.arguments)
    );

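    // Second call: give the model the function result and stream the final answer.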
    const response = await model.createChatCompletion({
      model: 'gpt-4-0613',
      stream: true,
      messages: [
        initialMessage,
        message,
        {
          role: 'function',
          name: message.function_call.name,
          content: functionResponse,
        },
      ],
    });
    console.log('second ', response.data.choices[0].message);
    const stream = OpenAIStream(response); // => at this point we get some errors, described below.

    return new StreamingTextResponse(stream);
  }
}

The response from the openai package is AxiosResponse<CreateChatCompletionResponse, any>, so we get errors because OpenAIStream from ai expects a Fetch Response type. Streaming is therefore not working right now. I think it would work if I called the REST API directly instead of using the openai package.

Note: I see that Vercel uses openai-edge in some examples, BUT that library doesn't have an option to pass functions and function_call as parameters.
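To illustrate the workaround, here is a minimal sketch of what I mean by calling the REST API directly, so that OpenAIStream receives a real Fetch Response (untested; it reuses initialMessage, message, and functionResponse from the route above):

// Replace the second model.createChatCompletion(...) call with a raw fetch,
// so the streaming Response can be passed straight to OpenAIStream.
const streamingResponse = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'gpt-4-0613',
    stream: true,
    messages: [
      initialMessage,
      message,
      {
        role: 'function',
        name: message.function_call.name,
        content: functionResponse,
      },
    ],
  }),
});

const stream = OpenAIStream(streamingResponse); // a Fetch Response, so the types line up
return new StreamingTextResponse(stream);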

CarlosZiegler commented 1 year ago

Here is an example with functions. Please don't mind the design, I haven't changed it :) https://github.com/CarlosZiegler/next-ai

Here is the deployed version : https://next-ai-nine.vercel.app/

Tomorrow I will improve the design and add a readme. Thanks @Vercel, this lib helps a lot :)

heymartinadams commented 1 year ago

Tagging https://github.com/dan-kwiat/openai-edge/pull/7 since Vercel uses the openai-edge package. Hopefully this gets implemented soon 😍 (I mean... functions + AI 🥹)

Zakinator123 commented 1 year ago

As a first step, I've added a PR that allows Edge Runtime route handlers to stream function-calling responses from OpenAI. #154

yutakobayashidev commented 1 year ago

It would be nice to be able to return to the client not only the response message but also the result of the function call. This would allow for a variety of visual representations.

Zakinator123 commented 1 year ago

I've taken a crack at the problem here: #178. @yutakobayashidev That PR does exactly what you asked for.

yutakobayashidev commented 1 year ago

@Zakinator123 This is great! For example, would it be possible to access the weather API when get_current_weather is called and implement our own weather UI based on the response? If this could be done, I think it would enable a variety of experiences beyond chatting.

Zakinator123 commented 1 year ago

> @Zakinator123 This is great! For example, would it be possible to access the weather API when get_current_weather is called and implement our own weather UI based on the response? If this could be done, I think it would enable a variety of experiences beyond chatting.

Yes, that type of experience is exactly what my PR enables.
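Roughly, the client side could look like the sketch below (based on the experimental API in that PR; names and signatures may still change). The handler intercepts the function call, fetches real data, and the component branches its rendering on role === 'function' messages. WeatherCard here is a made-up placeholder component:

'use client';

import { nanoid } from 'ai';
import { useChat } from 'ai/react';
import type { ChatRequest, FunctionCallHandler } from 'ai';

// Runs on the client when the model asks to call get_current_weather.
const functionCallHandler: FunctionCallHandler = async (chatMessages, functionCall) => {
  if (functionCall.name !== 'get_current_weather') return;
  const args = JSON.parse(functionCall.arguments ?? '{}');

  // Swap in a real weather API call here; hard-coded for the sketch.
  const weather = { location: args.location, temperature: 72, unit: args.unit ?? 'fahrenheit' };

  const functionResponse: ChatRequest = {
    messages: [
      ...chatMessages,
      { id: nanoid(), role: 'function', name: 'get_current_weather', content: JSON.stringify(weather) },
    ],
  };
  return functionResponse; // sent back so the model can produce the final answer
};

// Placeholder UI for the function result.
function WeatherCard({ data }: { data: { location: string; temperature: number; unit: string } }) {
  return <div>{data.location}: {data.temperature}°{data.unit === 'celsius' ? 'C' : 'F'}</div>;
}

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
    experimental_onFunctionCall: functionCallHandler,
  });

  return (
    <div>
      {messages.map(m =>
        m.role === 'function'
          ? <WeatherCard key={m.id} data={JSON.parse(m.content)} />
          : <p key={m.id}>{m.content}</p>
      )}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} />
      </form>
    </div>
  );
}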

CarlosZiegler commented 1 year ago

Wow, can you provide an example once it's merged? That would be great!!!!

MaxLeiter commented 1 year ago

Closing as initial support was implemented in #311. Please create new issues for specific needs you encounter and we can address them individually.

You can view the docs here: https://sdk.vercel.ai/docs/guides/functions
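For reference, a minimal server-side sketch in the spirit of that guide (adapted; the option is experimental and details may differ between versions). It uses openai-edge so OpenAIStream gets a Fetch Response, and reuses a function schema like the get_current_weather definition earlier in this thread:

import { Configuration, OpenAIApi } from 'openai-edge';
import { OpenAIStream, StreamingTextResponse } from 'ai';

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

export async function POST(req: Request) {
  const { messages } = await req.json();

  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo-0613',
    stream: true,
    messages,
    functions: [functionDescription], // e.g. the get_current_weather schema from above
  });

  const stream = OpenAIStream(response, {
    experimental_onFunctionCall: async (
      { name, arguments: args },
      createFunctionCallMessages
    ) => {
      if (name === 'get_current_weather') {
        // Call a real weather API here; hard-coded for the sketch.
        const weather = { temperature: 72, unit: args.unit ?? 'fahrenheit' };
        const newMessages = createFunctionCallMessages(weather);
        // Send the function result back to the model and stream the final answer.
        return openai.createChatCompletion({
          model: 'gpt-3.5-turbo-0613',
          stream: true,
          messages: [...messages, ...newMessages],
        });
      }
    },
  });

  return new StreamingTextResponse(stream);
}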