Chainlit / cookbook

Chainlit's cookbook repo
https://github.com/Chainlit/chainlit

openai-functions: calling the same function in a loop #45

Closed AmitSingh350 closed 8 months ago

AmitSingh350 commented 8 months ago

https://github.com/Chainlit/cookbook/tree/main/openai-functions

I have implemented my own functions in the weather example, and what I am noticing is that the app often gets into an infinite loop, calling the same function again and again, and breaks out only once it reaches MAX_ITER.

willydouhard commented 8 months ago

It means that OpenAI keeps deciding to call that function, which is weird. Are you sending the output of your function back to OpenAI once it asks you to call it?
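
For reference, the round trip the API expects looks roughly like this: the assistant message carrying the tool call goes back into the history, followed by a result message that references the same `tool_call_id`. A minimal sketch with hardcoded values and no network calls (the `call_abc123` id and the weather payload are made up for illustration):

```python
import json

# Simulated assistant reply that requests a tool call, shaped like what
# chat.completions.create returns when the model decides to call a function.
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_abc123",  # hypothetical id
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": json.dumps({"location": "San Francisco, CA"}),
            },
        }
    ],
}

message_history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the weather in San Francisco?"},
]

# 1) Append the assistant message that contains the tool call...
message_history.append(assistant_message)

# 2) ...then append the tool result, referencing the same tool_call_id.
#    If this step is skipped, the model never sees the result and will
#    keep asking for the same function call on every iteration.
for tool_call in assistant_message["tool_calls"]:
    args = json.loads(tool_call["function"]["arguments"])
    result = json.dumps({"location": args["location"], "temperature": "72"})
    message_history.append(
        {
            "role": "tool",
            "tool_call_id": tool_call["id"],
            "content": result,
        }
    )

print(message_history[-1]["role"])  # -> tool
```

The next `chat.completions.create` call then receives this full history, so the model can see that the function has already run.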

AmitSingh350 commented 8 months ago

Yes, I am; I have not changed anything else in the code other than adding two new functions at the top.

AmitSingh350 commented 8 months ago

Here is a sample to reproduce the problem.

Use the prompt: "generate a pie chart showing 10 apples and 20 oranges"

import json, ast, os
from openai import AsyncOpenAI
import matplotlib.pyplot as plt

import chainlit as cl
from chainlit.prompt import Prompt, PromptMessage

api_key = os.environ.get("OPENAI_API_KEY")
client = AsyncOpenAI(api_key=api_key)

MAX_ITER = 5

# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location, unit=None):
    """Get the current weather in a given location"""
    unit = unit or "Fahrenheit"
    weather_info = {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }

    return json.dumps(weather_info)

def create_pie_chart(labels, sizes):

    assert len(labels) == len(sizes), "Number of labels and sizes must match"
    assert sum(sizes) > 0, "Sizes must sum to a number greater than zero"

    # Configure the pie chart to look better
    plt.figure(figsize=(8, 8))  # Set the figure size
    plt.pie(sizes, labels=labels, autopct='%1.1f%%')  # Draw the pie chart with labels and percentages
    plt.axis('equal')  # Equal aspect ratio ensures the pie is drawn as a circle.

    # Save the pie chart to a file
    plt.savefig("pie.png")
    return "chart generated"

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    },

    {
        "type": "function",
        "function": {
            "name": "create_pie_chart",
            "description": "Generates a pie chart for the given data and saves it to a file.",
            "parameters": {
                "type": "object",
                "properties": {
                    "labels": {
                        "type": "array",
                        "description": "List of labels for each segment of the pie chart. Example: ['Cats', 'Dogs', 'Birds']",
                        "items": {"type": "string"},
                    },
                    "sizes": {
                        "type": "array",
                        "description": "List of sizes for each segment, which determines the size of each slice of pie. Example: [50, 30, 20]",
                        "items": {"type": "number"},
                    },
                },
                "required": ["labels", "sizes"],
            },
        },
    }
]

# Function to execute local functions based on the function name
def execute_local_function(function_name, **arguments):
    print("Executing function")
    # Mapping function names to function implementations
    functions = {
        "get_current_weather": get_current_weather,
        "create_pie_chart": create_pie_chart,
        # Add more functions here as needed
    }
    # Call the appropriate function; the functions above already return
    # JSON strings, so avoid double-encoding with another json.dumps here.
    func = functions.get(function_name)
    if func is None:
        return json.dumps({"error": f"No such function: {function_name}"})
    return func(**arguments)

@cl.on_chat_start
def start_chat():
    cl.user_session.set(
        "message_history",
        [{"role": "system", "content": "You are a helpful assistant."}],
    )

@cl.on_message
async def run_conversation(message: cl.Message):
    message_history = cl.user_session.get("message_history")
    message_history.append({"role": "user", "content": message.content})

    cur_iter = 0

    while cur_iter < MAX_ITER:
        settings = {
            "model": "gpt-4",
            "tools": tools,
            "tool_choice": "auto",
        }

        prompt = Prompt(
            provider="openai-chat",
            messages=[
                PromptMessage(
                    formatted=m["content"], name=m.get("name"), role=m["role"]
                )
                for m in message_history
            ],
            settings=settings,
        )

        response = await client.chat.completions.create(
            messages=message_history, **settings
        )

        message = response.choices[0].message

        prompt.completion = message.content or ""

        root_msg_id = await cl.Message(
            prompt=prompt, author=message.role, content=prompt.completion
        ).send()

        if not message.tool_calls:
            break

        for tool_call in message.tool_calls:
            if tool_call.type == "function":
                function_name = tool_call.function.name
                arguments = json.loads(tool_call.function.arguments)  # arguments arrive as a JSON string
                await cl.Message(
                    author=function_name,
                    content=str(tool_call.function),
                    language="json",
                    parent_id=root_msg_id,
                ).send()

                # function_response = get_current_weather(
                #     location=arguments.get("location"),
                #     unit=arguments.get("unit"),
                # )

                function_response = execute_local_function(function_name, **arguments)

                # Note: role "function" belongs to the legacy function-calling
                # API; the newer `tools` API expects role "tool", with the
                # assistant message carrying tool_calls appended beforehand.
                message_history.append(
                    {
                        "role": "function",
                        "name": function_name,
                        "content": function_response,
                        "tool_call_id": tool_call.id,
                    }
                )

                await cl.Message(
                    author=function_name,
                    content=str(function_response),
                    language="json",
                    parent_id=root_msg_id,
                ).send()
        cur_iter += 1

Here is the result (screenshot: 2023-12-11 19_08_57-Chatbot)
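
As an aside, the dispatcher in the snippet above can be sanity-checked in isolation, without involving OpenAI or Chainlit at all. A simplified, self-contained sketch (the weather values are the hardcoded dummies from the example):

```python
import json

def get_current_weather(location, unit=None):
    """Dummy weather lookup, hardcoded like the cookbook example."""
    unit = unit or "Fahrenheit"
    return json.dumps({"location": location, "temperature": "72", "unit": unit})

FUNCTIONS = {"get_current_weather": get_current_weather}

def execute_local_function(function_name, **arguments):
    # Dispatch by name; the registered functions already return JSON
    # strings, so the result is passed through without re-encoding.
    func = FUNCTIONS.get(function_name)
    if func is None:
        return json.dumps({"error": f"No such function: {function_name}"})
    return func(**arguments)

print(execute_local_function("get_current_weather", location="Paris"))
print(execute_local_function("bogus"))
```

Testing the dispatcher this way makes it easier to tell whether a loop comes from the local code or from the model's behavior.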

skt7 commented 8 months ago

Hi @AmitSingh350, changing the return statement from "chart generated" to "the chart was successfully created and saved to pie.png" did the job. GPT probably needed a more explicit confirmation to reach the finish_reason='stop' condition.

I would still suggest returning JSON; that way you can convey more information in a structured way, and GPT seems to understand JSON better when used for function calling. Here is the revised implementation that returns JSON:

def create_pie_chart(labels, sizes):

    assert len(labels) == len(sizes), "Number of labels and sizes must match"
    assert sum(sizes) > 0, "Sizes must sum to a number greater than zero"

    # Configure the pie chart to look better
    plt.figure(figsize=(8, 8))  # Set the figure size
    plt.pie(sizes, labels=labels, autopct='%1.1f%%')  # Draw the pie chart with labels and percentages
    plt.axis('equal')  # Equal aspect ratio ensures the pie is drawn as a circle.

    # Save the pie chart to a file
    plt.savefig("public/pie.png")

    chart_info = {
        "labels": labels,
        "sizes": sizes,
        "file_path": "public/pie.png",
        # You can add more info here
    }

    return json.dumps(chart_info)
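
Because the function now returns JSON, the file path can be pulled back out programmatically, which is roughly what GPT does when it builds the markdown link. A sketch with hardcoded values (matplotlib omitted so it stays self-contained):

```python
import json

# What the revised create_pie_chart would return, with values hardcoded here.
function_response = json.dumps({
    "labels": ["Apples", "Oranges"],
    "sizes": [10, 20],
    "file_path": "public/pie.png",
})

# Parse the structured result and build a markdown image reference from it.
chart_info = json.loads(function_response)
markdown = f"![Pie Chart]({chart_info['file_path']})"
print(markdown)  # -> ![Pie Chart](public/pie.png)
```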

Also, you can see that I saved the pie chart in the public folder and returned that path in the JSON; that way Chainlit was able to display it in the UI, because:

  1. By design, Chainlit serves all the content in the public folder.

  2. GPT did a good job recognizing the path in the JSON and produced a relative-path markdown link, like [![Pie Chart](public/pie.png)](public/pie.png), once the function description was changed to "Generates a pie chart for the given data and saves it to a file, returns the file path that assistant can use to display it using markdown format".

I also mentioned in the system prompt that the assistant can serve files from the public folder, to make this more reliable, since sometimes it generated markdown syntax without actually referring to the public folder:

@cl.on_chat_start
def start_chat():
    cl.user_session.set(
        "message_history",
        [{"role": "system", "content": "You are a helpful assistant. You can serve files from the 'public' folder using markdown href syntax."}],
    )

Here's the final output:

[Screenshot (90)]

Also tried a complicated query that should trigger the functions one by one, and it worked perfectly:

[Screenshot (91)]

[Screenshot (92)]

AmitSingh350 commented 8 months ago

Thank you @skt7. This solved the problem and great tip about the public folder. Cheers!