Chainlit / cookbook

Chainlit's cookbook repo
https://github.com/Chainlit/chainlit

OpenAI Assistant functions #41

ronfromhp opened this issue 9 months ago

ronfromhp commented 9 months ago

Error:

    Traceback (most recent call last):
      File "D:\Documendz\code-python\LLM chat app\aienv\lib\site-packages\chainlit\utils.py", line 39, in wrapper
        return await user_function(**params_values)
      File "D:\Documendz\code-python\LLM chat app\app2.py", line 120, in run_conversation
        if tool_call.type == "code_interpreter":
    AttributeError: 'dict' object has no attribute 'type'

code:

    # Periodically check for updates
    while True:
        run = await client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id
        )

        # Fetch the run steps
        run_steps = await client.beta.threads.runs.steps.list(
            thread_id=thread.id, run_id=run.id, order="asc"
        )

        for step in run_steps.data:
            # Fetch step details
            run_step = await client.beta.threads.runs.steps.retrieve(
                thread_id=thread.id, run_id=run.id, step_id=step.id
            )
            step_details = run_step.step_details
            # Update step content in the Chainlit UI
            if step_details.type == "message_creation":
                thread_message = await client.beta.threads.messages.retrieve(
                    message_id=step_details.message_creation.message_id,
                    thread_id=thread.id,
                )

                await process_thread_message(message_references, thread_message)

            if step_details.type == "tool_calls":
                for tool_call in step_details.tool_calls:
                    if tool_call.type == "code_interpreter":
                        if not tool_call.id in message_references:
                            message_references[tool_call.id] = cl.Message(
                                author=tool_call.type,
                                content=tool_call.code_interpreter.input
                                or "# Generating code...",
                                language="python",
                                parent_id=context.session.root_message.id,
                            )
                            await message_references[tool_call.id].send()
                        else:
                            message_references[tool_call.id].content = (
                                tool_call.code_interpreter.input
                                or "# Generating code..."
                            )
                            await message_references[tool_call.id].update()

                        tool_output_id = tool_call.id + "output"

                        if not tool_output_id in message_references:
                            message_references[tool_output_id] = cl.Message(
                                author=f"{tool_call.type}_result",
                                content=str(tool_call.code_interpreter.outputs) or "",
                                language="json",
                                parent_id=context.session.root_message.id,
                            )
                            await message_references[tool_output_id].send()
                        else:
                            message_references[tool_output_id].content = (
                                str(tool_call.code_interpreter.outputs) or ""
                            )
                            await message_references[tool_output_id].update()

                    elif tool_call.type == "retrieval":
                        if not tool_call.id in message_references:
                            message_references[tool_call.id] = cl.Message(
                                author=tool_call.type,
                                content="Retrieving information",
                                parent_id=context.session.root_message.id,
                            )
                            await message_references[tool_call.id].send()

                    # the part below doesn't work yet; it crashes for some reason
                    elif tool_call.type == "function":
                        function_name = tool_call.function.name
                        function_to_call = get_taxi_booking_information
                        function_args = json.loads(tool_call.function.arguments)
                        function_response = function_to_call(**function_args)
                        print(function_response)
                        if not tool_call.id in message_references:
                            message_references[tool_call.id] = cl.Message(
                                author=tool_call.type,
                                content=function_response,
                                parent_id=context.session.root_message.id,
                            )
                            await message_references[tool_call.id].send()

        await cl.sleep(1)  # Refresh every second

        if run.status in ["cancelled", "failed", "completed", "expired"]:
            break

All I tried to do was add a function-type tool call in the same format as the other tool calls. To reproduce this error/bug, just add a function to the tools while initialising an assistant and then use this code.

willydouhard commented 9 months ago

Just updated the example. I was mostly able to run your code; maybe update your openai package? I started to add support for functions. Right now it stops the execution; what you want to do is actually call your function and send the response to the assistant.
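For reference, sending function results back to an Assistants run is done with `runs.submit_tool_outputs`. Below is a minimal sketch of that flow; `build_tool_outputs` is a hypothetical helper name (not from the cookbook), the weather lambda in the test is illustrative, and the commented-out submit call assumes the async `client`, `thread`, and `run` used elsewhere in this thread:

```python
import json

def build_tool_outputs(tool_calls, available_functions):
    """Execute each requested function call and collect results in the
    shape expected by client.beta.threads.runs.submit_tool_outputs."""
    outputs = []
    for tool_call in tool_calls:
        # Handle both the dict and the object form seen in this thread
        if isinstance(tool_call, dict):
            call_id = tool_call["id"]
            name = tool_call["function"]["name"]
            arguments = tool_call["function"]["arguments"]
        else:
            call_id = tool_call.id
            name = tool_call.function.name
            arguments = tool_call.function.arguments
        fn = available_functions[name]
        result = fn(**json.loads(arguments))
        # output must be a string
        outputs.append({"tool_call_id": call_id, "output": str(result)})
    return outputs

# The collected outputs are then sent back so the run can continue, e.g.:
# run = await client.beta.threads.runs.submit_tool_outputs(
#     thread_id=thread.id, run_id=run.id, tool_outputs=outputs
# )
```

After submitting, the run leaves the `requires_action` state and the polling loop above picks up the assistant's final answer.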

ronfromhp commented 9 months ago

My bad, sorry for not specifying exactly how the bug appears. Initially it works as usual, but once you state your intent to run the custom function, the assistant tries to execute it, which triggers the exception.

For example, to recreate the bug, simply create a function called get_weather with an apt description and then ask the model to get the weather.

I also made the changes you suggested, without any success:

    elif tool_call.type == "function":
        function_name = tool_call.function.name
        function_args = json.loads(tool_call.function.arguments)

        if not tool_call.id in message_references:
            message_references[tool_call.id] = cl.Message(
                author=function_name,
                content=function_args,
                language="json",
                parent_id=context.session.root_message.id,
            )
            await message_references[tool_call.id].send()

        func_message = await client.beta.threads.messages.create(
            thread_id=thread.id, role="user", content=function_args,
        )
        # raise NotImplementedError(
        #     "Implement your function call here and send the response to the assistant"
        # )

Also, I've debugged and determined that the code never enters the elif tool_call.type == "function" block when this error occurs.

willydouhard commented 9 months ago

Then the fault is in your own function code if I understand correctly?

ronfromhp commented 9 months ago

No, I meant that this error happens no matter what function you use. You can define your own function and see what happens when you ask the model to call it. It can't be the function config or the function call itself, since the function was never called in my example.

willydouhard commented 9 months ago

Just updated the example again.


Running openai example on weather function works on my end

ronfromhp commented 9 months ago

I keep running into the same error on my system even with the latest code. I think I'll leave this issue open for anyone with the same seemingly unreproducible problem. For clarity, I was using openai v1.3.0 and running on Windows 11.

willydouhard commented 9 months ago

What happens if you create an assistant with the example weather function and run exactly the cookbook code?

ronfromhp commented 9 months ago

I did create an assistant with the weather function and ran the exact code in app.py from the cookbook; here's how it went:

      File "D:\Documendz\code-python\LLM chat app\aienv\lib\site-packages\chainlit\utils.py", line 39, in wrapper
        return await user_function(**params_values)
      File "D:\Documendz\code-python\LLM chat app\app2.py", line 116, in run_conversation
        if tool_call.type == "code_interpreter":
    AttributeError: 'dict' object has no attribute 'type'

willydouhard commented 9 months ago

This is really weird. Can you print tool_call? I fail to see why the OpenAI SDK would sometimes return a class instance and sometimes a dict. Maybe this is a known issue on their end?

ronfromhp commented 9 months ago

    CodeToolCall(id='call_ISmaA6hGQKgpoB2lhakjcUQb', code_interpreter=CodeInterpreter(input='# Perform the simple arithmetic operation of 9 + 10\nresult = 9 + 10\nresult', outputs=[CodeInterpreterOutputLogs(logs='19', type='logs')]), type='code_interpreter')
    2023-11-16 04:41:42 - HTTP Request: GET https://api.openai.com/v1/threads/thread_eUIg5Wo60YlEW273NzzHT9Nb/runs/run_QxbW6TGqrfOvtHZH4Jj3UX9w/steps/step_6LJRZKLldi8U1sYHnfXz3vcg "HTTP/1.1 200 OK"
    ...
    https://api.openai.com/v1/threads/thread_eUIg5Wo60YlEW273NzzHT9Nb/runs/run_87WajPnIDvjVrSJjdldZHluo/steps/step_BVdLZBchbq2mCDJmubfsA3GD "HTTP/1.1 200 OK"
    {'id': 'call_cj1ZciInLlfQHQD28GOrir4R', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"location":"New York"}'}}
    2023-11-16 04:42:53 - 'dict' object has no attribute 'type'
    Traceback (most recent call last):
      File "D:\Documendz\code-python\LLM chat app\aienv\lib\site-packages\chainlit\utils.py", line 39, in wrapper
        return await user_function(**params_values)
      File "D:\Documendz\code-python\LLM chat app\app2.py", line 117, in run_conversation
        if tool_call.type == "code_interpreter":
    AttributeError: 'dict' object has no attribute 'type'

I wish I had done this sooner to explain the bug. Note how the code_interpreter tool call is printed as a class instance, but the function tool call comes through as a plain dict right before the crash.

willydouhard commented 9 months ago

Okay, I still fail to understand why you get a dict when I get a class instance. An easy fix is to check:

    if isinstance(tool_call, dict):
        # process the dict form
    else:
        # process the object form

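One way to avoid branching at every call site is a tiny accessor that hides the dict-vs-object difference. This is just a sketch (get_field is an illustrative name, not a Chainlit or OpenAI helper):

```python
def get_field(obj, name, default=None):
    """Read a field from either a plain dict or an SDK model object."""
    if isinstance(obj, dict):
        return obj.get(name, default)
    return getattr(obj, name, default)

# Usage: works whichever form tool_call arrived in
# tool_type = get_field(tool_call, "type")
```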
dividor commented 9 months ago

Thanks for the amazing cookbook app, so quick!

Just to add: I get exactly the same error. Basically, on the first turn tool_call comes through as a dictionary, not an object ...

MessageCreationStepDetails(message_creation=None, type='tool_calls', tool_calls=[{'id': 'call_6YTVEg6DNoOrTAwMX8ehtE4d', 'type': 'function', 'function': {'name': 'rweb_data_api', 'arguments': ''}}])

I tried converting all subsequent code to treat tool_call as a dictionary and call my function, which worked, but then the response from that came through as another tool call, and tool_call was now an object.

FunctionToolCall(id='call_Aw6pj1Z0tCXPoT1d744vvusZ', function=Function(arguments='{"keyword":"Nepal earthquake"}', name='rweb_data_api', output='\n\nProvincial Update #11 Earthquake Response – Karnali Province (18 November 2023)\n\n[\'\\nNepal\\n\', \'This repo ...

I could fix it using isinstance, but one wonders if this might duplicate code, and it doesn't fix the root error. I'll keep poking around, but thought I should mention it.

Here are my package versions ...

openai 1.3.5
chainlit 0.7.604

willydouhard commented 9 months ago

Thank you for your feedback!

We can definitely make the code handle both of the cases (would love a PR btw) but this inconsistency still looks weird to me. Maybe we should open an issue on OpenAI's repo?

dividor commented 9 months ago

In case it's useful, I was able to resolve the issue by first creating a class ...

class DictToObject:
    def __init__(self, dictionary):
        for key, value in dictionary.items():
            setattr(self, key, value)

Then having this code to monitor tool_call and convert as-needed

    if isinstance(tool_call, dict):
        print(tool_call)
        tool_call = DictToObject(tool_call)
        if tool_call.type == "function":
            function = DictToObject(tool_call.function)
            tool_call.function = function
        if tool_call.type == "code_interpreter":
            code_interpreter = DictToObject(tool_call.code_interpreter)
            tool_call.code_interpreter = code_interpreter

Not very elegant though; there's something deeper down that could perhaps be fixed.
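A recursive variant of the same idea avoids converting each nested field by hand: nested dicts (and dicts inside lists, like tool_calls) become attribute objects automatically. Again just a sketch, not part of the cookbook:

```python
class NestedDictToObject:
    """Like DictToObject, but converts nested dicts and lists recursively,
    so tool_call.function.name works without per-type special cases."""

    def __init__(self, dictionary):
        for key, value in dictionary.items():
            setattr(self, key, self._convert(value))

    @classmethod
    def _convert(cls, value):
        # Recurse into dicts and lists; leave scalars untouched
        if isinstance(value, dict):
            return cls(value)
        if isinstance(value, list):
            return [cls._convert(v) for v in value]
        return value
```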

ronfromhp commented 9 months ago

I think a better way would be to compare against the pip freeze of willy's environment.

willydouhard commented 9 months ago

I can provide that; I'm also going to check the openai Python SDK repo issues.

ronfromhp commented 8 months ago

> In case it's useful, I was able to resolve the issue by first creating a class ... Not very elegant though, there's something deeper down that could perhaps fix.

I've solved it with slightly less elegant code, and other changes, in this PR: https://github.com/Chainlit/cookbook/pull/43