llm-edge / hal-9100

Edge full-stack LLM platform. Written in Rust
MIT License

Run steps returning AttributeError _set_private_attributes #80

Closed: CakeCrusher closed this issue 8 months ago

CakeCrusher commented 8 months ago

@louis030195 First, I initialize the assistant with:

```py
import json
from openai import OpenAI
client = OpenAI(
    base_url="http://localhost:3000",
    api_key="EMPTY"
)

As = client.beta.assistants.create(
    name="Math Tutor",
    instructions="You are a personal math tutor. You always begin your responses with'BANANA'. Write and run code to answer math questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-3.5-turbo"
)
Th = client.beta.threads.create()
Me1 = client.beta.threads.messages.create(
    thread_id=Th.id,
    role="user",
    content="Please solve [1 2 3;4 5 6] x [7 8;9 10;11 12] using python and show me the result."
)
Ru1 = client.beta.threads.runs.create(
  thread_id=Th.id,
  assistant_id=As.id,
)
```
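
Note that `runs.create` returns immediately and the run executes in the background, so anything that reads the run's output needs to wait for it to finish. A minimal polling sketch, assuming the standard Assistants run statuses:

```py
import time

# Poll until the run leaves the queued/in_progress states.
run = Ru1
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(run_id=Ru1.id, thread_id=Th.id)
print(run.status)
```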

The server then logs the following:

````

[2024-01-24T15:47:35Z INFO  assistants_core::assistants] Creating assistant: Assistant { inner: AssistantObject { id: "", object: "", created_at: 0, name: Some("Math Tutor"), description: None, model: "gpt-3.5-turbo", instructions: Some("You are a personal math tutor. You always begin your responses with'BANANA'. Write and run code to answer math questions."), tools: [Code(AssistantToolsCode { type: "code_interpreter" })], file_ids: [], metadata: None }, user_id: "00000000-0000-0000-0000-000000000000" }
tool: Code(AssistantToolsCode { type: "code_interpreter" })
[2024-01-24T15:47:35Z INFO  assistants_core::threads] Creating thread for user_id: 00000000-0000-0000-0000-000000000000
[2024-01-24T15:47:35Z INFO  assistants_core::messages] Adding message to thread_id: 9d4938dc-d357-4c6f-b92d-b89f169ae759, role: User, user_id: 00000000-0000-0000-0000-000000000000
thread_id: 9d4938dc-d357-4c6f-b92d-b89f169ae759
[2024-01-24T15:47:35Z INFO  assistants_core::runs] Running assistant_id: 2005597a-2b5e-4a37-8cce-71bccd5cfbd0 for thread_id: 9d4938dc-d357-4c6f-b92d-b89f169ae759
[2024-01-24T15:47:35Z INFO  assistants_core::runs] Creating run for assistant_id: 2005597a-2b5e-4a37-8cce-71bccd5cfbd0
[2024-01-24T15:47:35Z INFO  assistants_core::runs] Updating run for run_id: c2951de7-eded-4d1c-9bcf-2726f5f99b3b
[2024-01-24T15:47:35Z INFO  assistants_core::executor] Retrieving run
[2024-01-24T15:47:35Z INFO  assistants_core::runs] Getting run from database for thread_id: 9d4938dc-d357-4c6f-b92d-b89f169ae759 and run_id: c2951de7-eded-4d1c-9bcf-2726f5f99b3b
[2024-01-24T15:47:35Z INFO  assistants_core::executor] Retrieving assistant Some("2005597a-2b5e-4a37-8cce-71bccd5cfbd0")
[2024-01-24T15:47:35Z INFO  assistants_core::runs] Updating run for run_id: c2951de7-eded-4d1c-9bcf-2726f5f99b3b
[2024-01-24T15:47:36Z WARN  assistants_core::file_storage] Bucket already exists
[2024-01-24T15:47:36Z INFO  assistants_core::executor] Retrieving thread 9d4938dc-d357-4c6f-b92d-b89f169ae759
[2024-01-24T15:47:36Z INFO  assistants_core::threads] Getting thread from database for thread_id: 9d4938dc-d357-4c6f-b92d-b89f169ae759
[2024-01-24T15:47:36Z INFO  assistants_core::messages] Listing messages for thread_id: 9d4938dc-d357-4c6f-b92d-b89f169ae759
[2024-01-24T15:47:36Z INFO  assistants_core::executor] Formatted messages: <message>
    {"content":[{"text":{"annotations":[],"value":"Please solve [1 2 3;4 5 6] x [7 8;9 10;11 12] using python and show me the result."},"type":"text"}],"role":"user"}
    </message>

[2024-01-24T15:47:36Z INFO  assistants_core::executor] Assistant tools: [Code(AssistantToolsCode { type: "code_interpreter" })]
[2024-01-24T15:47:36Z INFO  assistants_core::executor] Asking LLM to decide which tool to use
[2024-01-24T15:47:36Z INFO  assistants_extra::llm] Calling OpenAI API with messages: [Message { role: "system", content: "You are an assistant that decides which tool to use based on a list of tools to solve the user problem.\n\nRules:\n- You only return one of the tools like \"<retrieval>\" or \"<function>\" or \"<code_interpreter>\" or \"<action>\" or multiple of them\n- Do not return \"tools\"\n- If you do not have any tools to use, return nothing\n- Feel free to use MORE tools rather than LESS\n- 
Tools use snake_case, not camelCase\n- The tool names must be one of the tools available, nothing else OR A HUMAN WILL DIE\n- Your answer must be very concise and make sure to surround the tool by <>, do not say anything but the tool name with the <> around it.\n- If you do not obey a human will die\n\nExample:\n<user>\n<tools>{\"description\":\"useful to call functions in the user's product, which would provide you later some additional context about the user's problem\",\"function\":{\"arguments\":{\"type\":\"object\"},\"description\":\"A function that compute the purpose of life according to the fundamental laws of the universe.\",\"name\":\"compute_purpose_of_life\"},\"name\":\"function\"}\n---\n{\"description\":\"useful to retrieve information 
from files\",\"name\":\"retrieval\"}</tools>\n\n<previous_messages>User: [Text(MessageContentTextObject { type: \"text\", text: TextData { value: \"I need to know the purpose of life, you can give me two answers.\", annotations: [] } })]\n</previous_messages>\n\n<instructions>You help me by using the tools you have.</instructions>\n\n</user>\n\nIn this example, the assistant should return \"<function>,<retrieval>\".\n\nAnother example:\n<user>\n<tools>{\"description\":\"useful to call functions in the user's product, which would provide you later some additional context about the user's problem\",\"function\":{\"arguments\":{\"type\":\"object\"},\"description\":\"A function that compute the cosine similarity between two vectors.\",\"name\":\"compute_cosine_similarity\"},\"name\":\"function\"}\n---\n{\"description\":\"useful to retrieve information from files\",\"name\":\"retrieval\"}</tools>\n\n<previous_messages>User: [Text(MessageContentTextObject { type: \"text\", text: TextData { value: \"Given these two vectors, how similar are they?\", annotations: [] } })]\n</previous_messages>\n\n<instructions>You help me by using the tools you have.</instructions>\n\n</user>\nAnother example:\n<user>\n<tools>{\"description\":\"useful to call functions in the user's product, which would provide you later some additional context about the user's problem\",\"function\":{\"arguments\":{\"type\":\"object\"},\"description\":\"A function that retrieves the customer's order history.\",\"name\":\"get_order_history\"},\"name\":\"function\"}\n---\n{\"description\":\"useful to retrieve information from files\",\"name\":\"retrieval\"}</tools>\n\n<previous_messages>User: [Text(MessageContentTextObject { type: \"text\", text: TextData { value: \"Can you tell me what my best selling products are?\", annotations: [] } })]\n</previous_messages>\n\n<instructions>You help me by using the tools you have.</instructions>\n\n</user>\n\nIn this example, the assistant should return \"<function>,<retrieval>\".\n\nAnother example:\n<user>\n<tools>{\"description\":\"useful to call functions in the user's product, which would provide you later some additional context about the user's problem\",\"function\":{\"arguments\":{\"type\":\"object\"},\"description\":\"A function that compute the purpose of life according to the fundamental laws of the universe.\",\"name\":\"compute_purpose_of_life\"},\"name\":\"function\"}\n---\n{\"description\":\"useful to retrieve information from files\",\"name\":\"retrieval\"}\n---\n{\"description\":\"useful for performing complex math problems which LLMs are bad at by default\",\"name\":\"code_interpreter\"}</tools>\n\n<previous_messages>User: [Text(MessageContentTextObject { type: \"text\", text: TextData { value: \"I need to calculate the square root of 144.\", annotations: [] } })]\n</previous_messages>\n\n<instructions>You help me by using the tools you have.</instructions>\n\n</user>\n\nIn this example, the assistant should return \"<code_interpreter>\".\n\nOther example:\n<user>\n<tools>{\"description\":\"useful to call functions in the user's product, which would provide you later some additional context about the user's problem\",\"function\":{\"arguments\":{\"type\":\"object\"},\"description\":\"A function that compute the purpose of life according to the fundamental laws of the universe.\",\"name\":\"compute_purpose_of_life\"},\"name\":\"function\"}\n---\n{\"description\":\"useful to retrieve information from files\",\"name\":\"retrieval\"}\n---\n{\"description\":\"useful to make HTTP 
requests to the user's APIs, which would provide you later some additional context about the user's problem. You can also use this to perform actions to help the user.\",\"data\":{\"info\":{\"description\":\"MediaWiki API\",\"title\":\"MediaWiki API\"},\"paths\":{\"/api.php\":{\"get\":{\"summary\":\"Fetch random facts from Wikipedia using the MediaWiki API\"}}}},\"name\":\"action\"}</tools>\n\n<previous_messages>User: [Text(MessageContentTextObject { type: \"text\", text: TextData { value: 
\"Can you tell me a random fact?\", annotations: [] } })]\n</previous_messages>\n\n<instructions>You help me by using the tools you have.</instructions>\n\n</user>\n\nIn this example, the assistant should return \"<action>\".\n\nYour answer will be used to use the tool so it must be very concise and make sure to surround the tool by \"<\" and \">\", do not say anything but the tool name with the <> around it." }, Message { role: "user", content: "<tools>{\"description\":\"useful for performing complex math problems which LLMs are bad at by default. Do not use code_interpreter if it's simple math that you believe a LLM can do (e.g. 1 + 1, 9 * 7, etc.) - Make sure to use code interpreter for more complex math problems\",\"name\":\"code_interpreter\"}</tools>\n\n<previous_messages>User: [Text(MessageContentTextObject { type: \"text\", text: TextData { value: \"Please solve [1 2 3;4 5 6] x [7 8;9 10;11 12] using python and show me the result.\", annotations: [] } })]\n</previous_messages>\n\n<instructions>You are a personal math tutor. You always begin your responses with'BANANA'. Write and run code to answer math questions.</instructions>\nSelected tool(s):" }]
[2024-01-24T15:47:36Z INFO  assistants_core::executor] decide_tool_with_llm raw result: <code_interpreter>
[2024-01-24T15:47:36Z INFO  assistants_core::executor] decide_tool_with_llm result: ["code_interpreter", "code_interpreter"]
[2024-01-24T15:47:36Z INFO  assistants_core::executor] Tools decision: ["code_interpreter"]
Generating Python code...
[2024-01-24T15:47:36Z INFO  assistants_core::function_calling] Generating function call with prompt: {
      "function": {
        "description": "A function that executes Python code",
        "name": "exec",
        "parameters": {
          "properties": {
            "code": {
              "description": "The Python code to execute",
              "type": "string"
            }
          },
          "required": [
            "code"
          ],
          "type": "object"
        }
      },
      "user_context": "\nYou are an Assistant that generate Python code to based user request to do complex computations. We execute the code you will generate and return the result to the user.\nGiven this user request\n\n<user>\n\n<message>\n{\"content\":[{\"text\":{\"annotations\":[],\"value\":\"Please solve [1 2 3;4 5 6] x [7 8;9 10;11 12] using python and show me the result.\"},\"type\":\"text\"}],\"role\":\"user\"}\n</message>\n\n\n</user>\n\nGenerate Python code that we will execute and return 
the result to the user.\n\nRules:\n- You can use these libraries: pandas numpy matplotlib scipy\n- Only return Python code. If you return anything else it will trigger a chain reaction that will destroy the universe. All humans will die and you will disappear from existence.\n- Make sure to use the right numbers e.g. with the user ask for the square root of 2, you should return math.sqrt(2) and not math.sqrt(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})).\n- Do not use any library if it's simple math (e.g. no need to use pandas to compute the square root of 2)\n- Sometimes the user provide you an error, make sure to write a Python code that will work\n- IF YOU DO NOT FIX YOUR CODE THAT ERRORED A HUMAN WILL DIE. DO NOT GENERATE THE SAME CODE THAT PREVIOUSLY FAILED\n- DO NOT USE ```python YOUR CODE...``` (CODE BLOCKS) OR A HUMAN WILL DIE\n- ALWAYS USE SINGLE QUOTES IN YOUR CODE (e.g. '), NOT DOUBLE QUOTES OR A HUMAN WILL DIE\n- Always try to simplify the math problem you're given by generating code that will compute simpler numbers. Your answer might be used by another Assistant that might not be good at math.\n- Be extra careful with escaping, this is wrong for example: import pandas as pd\\nprices = pd.read\\_csv('prices.csv')\\nprice\\_with\\_highest\\_demand = startups['demand'].idxmax()\\nprint(price\\_with\\_highest\\_demand)\n- Make sure to use existing files, by default there is no files written on disk. DO NOT TRY TO READ FILES. YOU DONT HAVE ANY FILES. DO NOT DO THINGS LIKE: pd.read_csv('startups.csv')\n- Make sure to surround strings by single quotes, e.g. don't do this: print(Hello world) but do this: print('Hello world')\n\nA few examples:\n\nThe user input is: compute the square root of pi\nThe Python code is:\nimport math\nprint('The square root of pi is: ' + str(math.sqrt(math.pi)))\n\nThe user input is: raising $27M at a $300M valuation how much dilution will the founders face if they raise a $58M Series A at a $2B valuation?\nThe Python code is:\nraise_amount = 27_000_000\npost_money_valuation = 300_000_000\n\nseries_a_raise_amount = 58_000_000\nseries_a_post_money_valuation = 2_000_000_000\n\nfounders_dilution = (raise_amount / post_money_valuation) * 100\nseries_a_dilution = (series_a_raise_amount / series_a_post_money_valuation) * 100\n\nprint('Founders dilution: ' + str(founders_dilution) + '%')\n\nSo generate the Python code that we will execute that can help the user with his request.\n\n<user>\n\n<message>\n{\"content\":[{\"text\":{\"annotations\":[],\"value\":\"Please solve [1 2 3;4 5 6] x [7 8;9 10;11 12] using python and show me the result.\"},\"type\":\"text\"}],\"role\":\"user\"}\n</message>\n\n\n</user>\n        "
    }
[2024-01-24T15:47:36Z INFO  assistants_extra::llm] Calling OpenAI API with messages: [Message { role: "system", content: "Given the user's problem, we have a set of functions available that could potentially help solve this problem. Please review the functions and their descriptions, and select the most appropriate function to use. Also, determine the best parameters to use for this function based on the user's context. \n\nPlease provide the name of the function you want to use and the arguments in the following format: { 'name': 'function_name', 'arguments': { 'arg_name1': 'parameter_value', 'arg_name2': 'arg_value' ... } }.\n\nRules:\n- The function name must be one of the functions available.\n- The arguments must be a subset of the arguments available.\n- Generate a single function at a time!\n- Generate a single JSON object at a time or it will break!\n- Do not return your input as output! Just return the function call!\n- Do not escape characters!\n- The arguments must be in the correct format (e.g. string, integer, etc.).\n- The arguments must be required by the function (e.g. if the function requires a parameter called 'city', then you must provide a value for 'city').\n- The arguments must be valid (e.g. if the function requires a parameter called 'city', then you must provide a valid city name).\n- **IMPORTANT**: Your response should not be a repetition of the prompt. It should be a unique and valid function call based on the user's context and the available functions.\n- If the function has no arguments, you don't need to provide the function arguments (e.g. { \"name\": \"function_name\" }).\n- CUT THE FUCKING BULLSHIT - YOUR ANSWER IS JSON NOTHING ELSE\n- **IMPORTANT**: IF YOU DO NOT RETURN ONLY JSON A HUMAN WILL DIE\n- IF YOU USE SINGLE QUOTE INSTEAD OF DOUBLE QUOTE IN THE JSON, THE UNIVERSE WILL COME TO AN END\n\nExamples:\n\n1. Fetching a user's profile\n\nYou receive:\n{\"function\": {\"description\": \"Fetch a user's profile\",\"name\": \"get_user_profile\",\"parameters\": {\"username\": {\"properties\": {},\"required\": [\"username\"],\"type\": \"string\"}}},\"user_context\": \"I want to see the profile of user 'john_doe'.\"}\nYour answer:\n{ \"name\": \"get_user_profile\", \"arguments\": { \"username\": \"john_doe\" } }\n\n2. Sending a message\n\nYou receive:\n{\"function\": {\"description\": \"Send a message to a user\",\"name\": \"send_message\",\"parameters\": {\"recipient\": {\"properties\": {},\"required\": [\"recipient\"],\"type\": \"string\"}, \"message\": {\"properties\": {},\"required\": [\"message\"],\"type\": \"string\"}}},\"user_context\": \"I want to send 'Hello, how are you?' to 'jane_doe'.\"}\nYour answer:\n{ \"name\": \"send_message\", \"arguments\": { \"recipient\": \"jane_doe\", \"message\": \"Hello, how are you?\" } }\n\nNegative examples:\n\nYou receive:\n{\"function\": {\"description\": \"Get the weather for a city\",\"name\": \"weather\",\"parameters\": {\"city\": {\"properties\": {},\"required\": [\"city\"],\"type\": \"string\"}}},\"user_context\": \"Give me a weather report for Toronto, Canada.\"}\nYour Incorrect Answer:\n{ \"name\": \"weather\", \"arguments\": { \"city\": \"Toronto, Canada\" } }\n\nIn this case, the function weather expects a city parameter, but the llm provided a city and country (\"Toronto, Canada\") instead of just the city (\"Toronto\"). 
This would cause the function call to fail because the weather function does not know how to handle a city and country as input.\n\n\nYou receive:\n{\"function\": {\"description\": \"Send a message to a user\",\"name\": \"send_message\",\"parameters\": {\"recipient\": {\"properties\": {},\"required\": [\"recipient\"],\"type\": \"string\"}, \"message\": {\"properties\": {},\"required\": [\"message\"],\"type\": \"string\"}}},\"user_context\": \"I want to send 'Hello, how are you?' to 'jane_doe'.\"}\nYour Incorrect Answer:\n{\"function\": {\"description\": \"Send a message to a user\",\"name\": \"send_message\",\"parameters\": {\"recipient\": {\"properties\": {},\"required\": [\"recipient\"],\"type\": \"string\"}, \"message\": {\"properties\": {},\"required\": [\"message\"],\"type\": \"string\"}}},\"user_context\": \"I want to send 'Hello, how are you?' to 'jane_doe'.\"}\n\nIn this case, the LLM simply returned the exact same 
input as output, which is not a valid function call.\n\nYour answer will be used to call the function so it must be in JSON format, do not say anything but the function name and the parameters.\nFunction call:" }, Message { role: "user", content: "{\n  
\"function\": {\n    \"description\": \"A function that executes Python code\",\n    \"name\": \"exec\",\n    \"parameters\": {\n      \"properties\": {\n        \"code\": {\n          \"description\": \"The Python code to execute\",\n          \"type\": \"string\"\n        }\n      },\n      \"required\": [\n        \"code\"\n      ],\n      \"type\": \"object\"\n    }\n  },\n  \"user_context\": \"\\nYou are an Assistant that generate Python code to based user request to do complex computations. We execute the code you will generate and return the result to the user.\\nGiven this user request\\n\\n<user>\\n\\n<message>\\n{\\\"content\\\":[{\\\"text\\\":{\\\"annotations\\\":[],\\\"value\\\":\\\"Please solve [1 2 3;4 5 6] x [7 8;9 10;11 12] using python and show me the result.\\\"},\\\"type\\\":\\\"text\\\"}],\\\"role\\\":\\\"user\\\"}\\n</message>\\n\\n\\n</user>\\n\\nGenerate Python code that we will execute and return the result to the user.\\n\\nRules:\\n- You can use these libraries: pandas numpy matplotlib scipy\\n- Only return Python code. If you return anything else it will trigger a chain reaction that will destroy the universe. All humans will die and you will disappear from existence.\\n- Make sure to use the right numbers e.g. with the user ask for the square root of 2, you should return math.sqrt(2) and not math.sqrt(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})).\\n- Do not use any library if it's simple math (e.g. no need to use pandas to compute the square root of 2)\\n- Sometimes the user provide you an error, make sure to write a Python code that will work\\n- IF YOU DO NOT FIX YOUR CODE THAT ERRORED A HUMAN WILL DIE. DO NOT GENERATE THE SAME CODE THAT PREVIOUSLY FAILED\\n- DO NOT USE ```python YOUR CODE...``` (CODE BLOCKS) OR A HUMAN WILL DIE\\n- ALWAYS USE SINGLE QUOTES IN YOUR CODE (e.g. '), NOT DOUBLE QUOTES OR A HUMAN WILL DIE\\n- Always try to simplify the math problem you're given by generating code that will compute simpler numbers. Your answer might be used by another Assistant that might not be good at math.\\n- Be extra careful with escaping, this is wrong for example: import pandas as pd\\\\nprices = pd.read\\\\_csv('prices.csv')\\\\nprice\\\\_with\\\\_highest\\\\_demand = startups['demand'].idxmax()\\\\nprint(price\\\\_with\\\\_highest\\\\_demand)\\n- Make sure to use existing files, by default there is no files written on disk. DO NOT TRY TO READ FILES. YOU DONT HAVE ANY FILES. DO NOT DO THINGS LIKE: pd.read_csv('startups.csv')\\n- Make sure to surround 
strings by single quotes, e.g. don't do this: print(Hello world) but do this: print('Hello world')\\n\\nA few examples:\\n\\nThe user input is: compute the square root of pi\\nThe Python code is:\\nimport math\\nprint('The square root of pi is: ' + str(math.sqrt(math.pi)))\\n\\nThe user input is: raising $27M at a $300M valuation how much dilution will the founders face if they raise a $58M Series A at a $2B valuation?\\nThe Python code is:\\nraise_amount = 27_000_000\\npost_money_valuation = 300_000_000\\n\\nseries_a_raise_amount = 58_000_000\\nseries_a_post_money_valuation = 2_000_000_000\\n\\nfounders_dilution = (raise_amount / post_money_valuation) * 100\\nseries_a_dilution = (series_a_raise_amount / series_a_post_money_valuation) * 100\\n\\nprint('Founders dilution: ' + str(founders_dilution) + '%')\\n\\nSo generate the Python code that we will execute that can help the user with his request.\\n\\n<user>\\n\\n<message>\\n{\\\"content\\\":[{\\\"text\\\":{\\\"annotations\\\":[],\\\"value\\\":\\\"Please solve [1 2 3;4 5 6] x [7 8;9 10;11 12] using python and show me the result.\\\"},\\\"type\\\":\\\"text\\\"}],\\\"role\\\":\\\"user\\\"}\\n</message>\\n\\n\\n</user>\\n        \"\n}" }]
[2024-01-24T15:47:39Z INFO  assistants_core::function_calling] Generated function call: { "name": "exec", "arguments": { "code": "import numpy as np\n\nmatrix1 = np.array([[1, 2, 3], [4, 5, 6]])\nmatrix2 = np.array([[7, 8], [9, 10], [11, 12]])\n\nresult = np.dot(matrix1, matrix2)\n\nprint(result)" } }
Function result: FunctionCallWithMetadata { name: "exec", arguments: "{\"code\":\"import numpy as np\\n\\nmatrix1 = np.array([[1, 2, 3], [4, 5, 6]])\\nmatrix2 = np.array([[7, 8], [9, 10], [11, 12]])\\n\\nresult = np.dot(matrix1, matrix2)\\n\\nprint(result)\"}", metadata: None }
Starting Docker container...
[2024-01-24T15:47:40Z INFO  assistants_core::run_steps] Creating step for run_id: c2951de7-eded-4d1c-9bcf-2726f5f99b3b
[2024-01-24T15:47:40Z INFO  assistants_core::executor] Calling LLM API with instructions: <instructions>

    </instructions>
    <function_calls>

    </function_calls>
    <action_calls>

    </action_calls>
    <previous_messages>
    <message>
    {"content":[{"text":{"annotations":[],"value":"Please solve [1 2 3;4 5 6] x [7 8;9 10;11 12] using python and show me the result."},"type":"text"}],"role":"user"}
    </message>

    </previous_messages>
    <file>
    []
    </file>
    <chunk>
    []
    </chunk>

[2024-01-24T15:47:40Z INFO  assistants_extra::llm] Calling OpenAI API with messages: [Message { role: "system", content: "You are an assistant that help a user based on tools and context you have.\n\nRules:\n- Do not hallucinate\n- Obey strictly to the 
user request e.g. in <message> tags - EXTREMELY IMPORTANT\n- Answer directly the user e.g. 'What is the solution to the equation \"x + 2 = 4\"?' You should answer \"x = 2\" even though receiving bunch of context before.\n- Do not add tags in your answer such as <function_calls> etc. nor continue the user sentence. Just answer the user.\n\nThese are additional instructions from the user that you must obey absolutely:\n\nYou are a personal math tutor. You always begin your responses with'BANANA'. Write 
and run code to answer math questions.\n\n" }, Message { role: "user", content: "<instructions>\n\n</instructions>\n<function_calls>\n\n</function_calls>\n<action_calls>\n\n</action_calls>\n<previous_messages>\n<message>\n{\"content\":[{\"text\":{\"annotations\":[],\"value\":\"Please solve [1 2 3;4 5 6] x [7 8;9 10;11 12] using python and show me the result.\"},\"type\":\"text\"}],\"role\":\"user\"}\n</message>\n\n</previous_messages>\n<file>\n[]\n</file>\n<chunk>\n[]\n</chunk>\n" }]
[2024-01-24T15:47:44Z INFO  assistants_core::executor] LLM API output: BANANA Sure! I can help you solve matrix multiplication using Python. Here's the code:

    ```python
    import numpy as np

    matrix1 = np.array([[1, 2, 3], [4, 5, 6]])
    matrix2 = np.array([[7, 8], [9, 10], [11, 12]])

    result = np.dot(matrix1, matrix2)
    print(result)
This code makes use of the NumPy library in Python. It multiplies two matrices, `[1 2 3;4 5 6]` and `[7 8;9 10;11 12]`, and stores the result in the `result` variable. Finally, it prints the resulting matrix.

Let me know if you need any further assistance!

[2024-01-24T15:47:44Z INFO  assistants_core::messages] Adding message to thread_id: 9d4938dc-d357-4c6f-b92d-b89f169ae759, role: Assistant, user_id: 00000000-0000-0000-0000-000000000000
[2024-01-24T15:47:44Z INFO  assistants_core::run_steps] Creating step for run_id: c2951de7-eded-4d1c-9bcf-2726f5f99b3b
[2024-01-24T15:47:44Z INFO  assistants_core::runs] Updating run for run_id: c2951de7-eded-4d1c-9bcf-2726f5f99b3b
[2024-01-24T15:47:44Z INFO  assistants_core::executor] Execution done: Run { inner: RunObject { id: "c2951de7-eded-4d1c-9bcf-2726f5f99b3b", object: "", created_at: 1706111256, thread_id: "9d4938dc-d357-4c6f-b92d-b89f169ae759", assistant_id: Some("2005597a-2b5e-4a37-8cce-71bccd5cfbd0"), status: Completed, required_action: None, last_error: None, expires_at: None, started_at: None, cancelled_at: None, failed_at: None, completed_at: None, model: "", instructions: "", tools: [], file_ids: [], metadata: Some({}) }, user_id: "00000000-0000-0000-0000-000000000000" }
[2024-01-24T15:47:44Z INFO  assistants_core::executor] Consuming queue
````


Then, when I try to fetch the run steps with:
```py
RS = client.beta.threads.runs.steps.list(
    run_id=Ru1.id,
    thread_id=Th.id,
)
print(json.dumps(RS.model_dump(), indent=2))
```

which produces these server logs:

```
[2024-01-24T15:47:44Z INFO  assistants_core::executor] Consuming queue
[2024-01-24T15:49:52Z INFO  assistants_core::run_steps] Listing steps for run_id: 9d4938dc-d357-4c6f-b92d-b89f169ae759
```

I get this error:

```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[15], line 1
----> 1 RS = client.beta.threads.runs.steps.list(
      2     run_id=Ru1.id,
      3     thread_id=Th.id,
      4 )
      5 print(json.dumps(RS.model_dump(), indent=2))

File c:\Projects\assistants-api\.venv\lib\site-packages\openai\resources\beta\threads\runs\steps.py:110, in Steps.list(self, run_id, thread_id, after, before, limit, order, extra_headers, extra_query, extra_body, timeout)
     81 """
     82 Returns a list of run steps belonging to a run.
     83
    (...)
    107   timeout: Override the client-level default timeout for this request, in seconds
    108 """
    109 extra_headers = {"OpenAI-Beta": "assistants=v1", **(extra_headers or {})}
--> 110 return self._get_api_list(
    111     f"/threads/{thread_id}/runs/{run_id}/steps",
    112     page=SyncCursorPage[RunStep],
    113     options=make_request_options(
    114         extra_headers=extra_headers,
    115         extra_query=extra_query,
    116         extra_body=extra_body,
    117         timeout=timeout,
    118         query=maybe_transform(
    119             {
    120                 "after": after,
    121                 "before": before,
    122                 "limit": limit,
    123                 "order": order,
    124             },
    125             step_list_params.StepListParams,
    126         ),
    127     ),
    128     model=RunStep,
    129 )

File c:\Projects\assistants-api\.venv\lib\site-packages\openai\_base_client.py:1138, in SyncAPIClient.get_api_list(self, path, model, page, body, options, method)
   1127 def get_api_list(
   1128     self,
   1129     path: str,
   (...)
   1135     method: str = "get",
   1136 ) -> SyncPageT:
   1137     opts = FinalRequestOptions.construct(method=method, url=path, json_data=body, **options)
-> 1138     return self._request_api_list(model, page, opts)

File c:\Projects\assistants-api\.venv\lib\site-packages\openai\_base_client.py:983, in SyncAPIClient._request_api_list(self, model, page, options)
    979     return resp
    981 options.post_parser = _parser
--> 983 return self.request(page, options, stream=False)

File c:\Projects\assistants-api\.venv\lib\site-packages\openai\_base_client.py:854, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    845 def request(
    846     self,
    847     cast_to: Type[ResponseT],
   (...)
    852     stream_cls: type[_StreamT] | None = None,
    853 ) -> ResponseT | _StreamT:
--> 854     return self._request(
    855         cast_to=cast_to,
    856         options=options,
    857         stream=stream,
    858         stream_cls=stream_cls,
    859         remaining_retries=remaining_retries,
    860     )

File c:\Projects\assistants-api\.venv\lib\site-packages\openai\_base_client.py:933, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
    929         err.response.read()
    931     raise self._make_status_error_from_response(err.response) from None
--> 933 return self._process_response(
    934     cast_to=cast_to,
    935     options=options,
    936     response=response,
    937     stream=stream,
    938     stream_cls=stream_cls,
    939 )

File c:\Projects\assistants-api\.venv\lib\site-packages\openai\_base_client.py:519, in BaseClient._process_response(self, cast_to, options, response, stream, stream_cls)
    516 if response.request.headers.get(RAW_RESPONSE_HEADER) == "true":
    517     return cast(ResponseT, api_response)
--> 519 return api_response.parse()

File c:\Projects\assistants-api\.venv\lib\site-packages\openai\_response.py:63, in APIResponse.parse(self)
     61 parsed = self._parse()
     62 if is_given(self._options.post_parser):
---> 63     parsed = self._options.post_parser(parsed)
     65 self._parsed = parsed
     66 return parsed

File c:\Projects\assistants-api\.venv\lib\site-packages\openai\_base_client.py:974, in SyncAPIClient._request_api_list.<locals>._parser(resp)
    973 def _parser(resp: SyncPageT) -> SyncPageT:
--> 974     resp._set_private_attributes(
    975         client=self,
    976         model=model,
    977         options=options,
    978     )
    979     return resp

AttributeError: 'list' object has no attribute '_set_private_attributes'
```
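
For anyone hitting this: the error suggests the server returns a bare JSON array from the steps endpoint, so the client's pydantic layer parses it into a plain Python `list`, and the `_parser` post-processor then fails calling `_set_private_attributes` on it. A quick way to confirm is to inspect the raw payload directly, bypassing the client's page parsing. A minimal sketch, assuming the local server above and the `requests` library:

```py
import requests

# Hit the run-steps endpoint directly. The openai client expects a
# cursor-page envelope here ({"object": "list", "data": [...], ...});
# a bare JSON array is what triggers the AttributeError above.
resp = requests.get(
    f"http://localhost:3000/threads/{Th.id}/runs/{Ru1.id}/steps",
    # Matches api_key="EMPTY" in the client setup; the local server
    # may not check it at all.
    headers={"Authorization": "Bearer EMPTY"},
)
print(resp.status_code)
print(resp.json())
```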
louis030195 commented 8 months ago

`list` returns a `Vec` of:

```rust
pub struct RunStepObject {
    /// The identifier, which can be referenced in API endpoints.
    pub id: String,
    /// The object type, which is always `thread.run.step`.
    pub object: String,
    /// The Unix timestamp (in seconds) for when the run step was created.
    pub created_at: i32,

    /// The ID of the [assistant](https://platform.openai.com/docs/api-reference/assistants) associated with the run step.
    pub assistant_id: Option<String>,

    /// The ID of the [thread](https://platform.openai.com/docs/api-reference/threads) that was run.
    pub thread_id: String,

    ///  The ID of the [run](https://platform.openai.com/docs/api-reference/runs) that this run step is a part of.
    pub run_id: String,

    /// The type of run step, which can be either `message_creation` or `tool_calls`.
    pub r#type: RunStepType,

    /// The status of the run step, which can be either `in_progress`, `cancelled`, `failed`, `completed`, or `expired`.
    pub status: RunStatus,

    /// The details of the run step.
    pub step_details: StepDetails,

    /// The last error associated with this run. Will be `null` if there are no errors.
    pub last_error: Option<LastError>,

    /// The Unix timestamp (in seconds) for when the run step expired. A step is considered expired if the parent run is expired.
    pub expired_at: Option<i32>,

    /// The Unix timestamp (in seconds) for when the run step was cancelled.
    pub cancelled_at: Option<i32>,

    /// The Unix timestamp (in seconds) for when the run step failed.
    pub failed_at: Option<i32>,

    /// The Unix timestamp (in seconds) for when the run step completed.
    pub completed_at: Option<i32>,

    pub metadata: Option<HashMap<String, serde_json::Value>>,
}
```

Need to check whether this matches what the client expects.
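
For reference, the Python client deserializes this endpoint as `SyncCursorPage[RunStep]` (see the traceback above), so it expects the steps wrapped in a list envelope rather than a bare array. A sketch of the expected shape; all field values below are illustrative placeholders:

```py
# Illustrative placeholders only; the envelope shape is what matters.
expected_response = {
    "object": "list",
    "data": [
        {
            "id": "step_abc123",
            "object": "thread.run.step",
            "created_at": 1706111256,
            "assistant_id": "2005597a-2b5e-4a37-8cce-71bccd5cfbd0",
            "thread_id": "9d4938dc-d357-4c6f-b92d-b89f169ae759",
            "run_id": "c2951de7-eded-4d1c-9bcf-2726f5f99b3b",
            "type": "tool_calls",
            "status": "completed",
            "step_details": {"type": "tool_calls", "tool_calls": []},
            "last_error": None,
            "expired_at": None,
            "cancelled_at": None,
            "failed_at": None,
            "completed_at": None,
            "metadata": None,
        }
    ],
    "first_id": "step_abc123",
    "last_id": "step_abc123",
    "has_more": False,
}
```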

ClancyDennis commented 8 months ago

Bump. I'm also hitting this error with Chainlit.

louis030195 commented 8 months ago

@CakeCrusher @ClancyDennis

Indeed, the API response type was wrong. I fixed it; let me know if you run into any other issues!
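
After picking up the fix, listing the steps should parse cleanly. A quick verification sketch, using the same client setup as above:

```py
RS = client.beta.threads.runs.steps.list(
    run_id=Ru1.id,
    thread_id=Th.id,
)
# Each item is now a proper RunStep model rather than a raw dict.
for step in RS.data:
    print(step.id, step.type, step.status)
```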
