Azure-Samples / azureai-assistant-tool

The Azure AI Assistant Tool is an experimental Python application and middleware designed to simplify the development, experimentation, testing, and debugging of OpenAI assistants.
MIT License

AttributeError: 'NoneType' object has no attribute 'content' #45

Closed sankethgadadinni closed 2 months ago

sankethgadadinni commented 2 months ago

Hi, I have made some modifications to the code base from the examples.

This is my code. If I'm making constant calls to this agent, I'm getting an AttributeError: 'NoneType' object has no attribute 'content' error. Can you guys help?


import os
from app.config import Config

from azure.ai.assistant.management.assistant_client import AssistantClient
from azure.ai.assistant.management.conversation_thread_client import ConversationThreadClient
from azure.ai.assistant.management.ai_client_factory import AIClientType

os.environ['AZURE_OPENAI_API_KEY'] = Config.OPENAI_API_KEY
os.environ['AZURE_OPENAI_ENDPOINT'] = Config.OPENAI_API_BASE
os.environ['AZURE_OPENAI_API_VERSION'] = Config.OPENAI_API_VERSION
os.environ['AZURE_OPENAI_DEPLOYMENT'] = Config.deployment_name

def run_conversation(path, assistant_name, user_message):
    try:
        with open(os.path.join(path, f"{assistant_name}_assistant_config.yaml"), "r") as file:
            config = file.read()

        # Retrieve the assistant client
        assistant_client = AssistantClient.from_yaml(config)

        # Create a new conversation thread client
        ai_client_type = AIClientType[assistant_client.assistant_config.ai_client_type]
        conversation_thread_client = ConversationThreadClient.get_instance(ai_client_type)

        # Create a new conversation thread
        thread_name = conversation_thread_client.create_conversation_thread()

        # Add the user message to the conversation thread
        conversation_thread_client.create_conversation_thread_message(user_message, thread_name)

        # Process the user messages
        assistant_client.process_messages(thread_name=thread_name)

        # Retrieve the conversation
        conversation = conversation_thread_client.retrieve_conversation(thread_name)

        # Print the last assistant response from the conversation
        assistant_message = conversation.get_last_text_message(assistant_client.name)

        if assistant_message is None:
            raise ValueError("The assistant did not return a valid message.")

        return assistant_message.content

    except Exception as e:
        print(f"Error while generating insights: {e}")
        raise  # re-raise without resetting the traceback
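As the rest of the thread shows, the missing assistant message here is typically caused by a failed run (e.g. rate limiting), so one pragmatic guard is to retry the whole retrieval with exponential backoff when no message comes back. This is a hedged sketch, not part of the tool's API; `fetch_last_message` below is a hypothetical stand-in for a call like `run_conversation`:

```python
import time

def call_with_retry(fn, *args, retries=3, base_delay=2.0):
    """Retry fn when it returns None (e.g. because the underlying run
    failed with rate_limit_exceeded), backing off exponentially."""
    for attempt in range(retries):
        result = fn(*args)
        if result is not None:
            return result
        time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
    raise ValueError("The assistant did not return a valid message after retries.")

# Hypothetical usage: a stand-in that fails twice, then succeeds.
calls = {"n": 0}
def fetch_last_message():
    calls["n"] += 1
    return None if calls["n"] < 3 else "dashboard components ready"

print(call_with_retry(fetch_last_message, base_delay=0.01))  # → dashboard components ready
```

In the real flow, `fn` would wrap the `retrieve_conversation` / `get_last_text_message` pair rather than a stand-in.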
jhakulin commented 2 months ago

Hello,

Thanks for the report. What do you have for the model deployment name? It seems I can reproduce the issue with gpt-4o-mini, and looking at the logs there is a service-side error. I will report that to the service team and follow up.

With gpt-4-1106-preview, the scenario works OK.

sankethgadadinni commented 2 months ago

Hi @jhakulin,

I'm using gpt-4o. I have hosted this agent as a FastAPI service. Whenever there are more than two concurrent calls, it ends up with this error. Are you saying that if I use gpt-4-1106-preview I won't have this issue?

jhakulin commented 2 months ago

@sankethgadadinni Based on the information I got, there can be an issue where some models do not work as reliably at the moment. That's right, I cannot reproduce the issue with gpt-4-1106-preview.

jhakulin commented 2 months ago

I got confirmation that there is an issue with gpt-4o-mini, which is under investigation; however, gpt-4o should work, and it also worked OK for me. When you say "whenever there are more than two concurrent calls", do you mean that two assistant clients are created for the same assistant and these two clients operate at the same time?

sankethgadadinni commented 2 months ago

@jhakulin Yeah, something like that. I'm implementing it asynchronously. Additionally, the assistant code above gets executed twice for each API call to generate the response.

import logging
import json

from fastapi import APIRouter, HTTPException
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

def get_suggestion(user_message):
    suggestion_response = run_conversation(Config.config_base_path + "/report_configs/", "DashboardSuggestionAgent", user_message)
    print("suggestion", suggestion_response)
    return suggestion_response

def create_report(user_message, suggestions):
    # Note: `suggestions` is currently unused by this helper
    coding_response = run_conversation(Config.config_base_path + "/report_configs/", "CodeProgrammerAgent", user_message)
    print("code", coding_response)
    return coding_response

router = APIRouter()

class ReportRequest(BaseModel):
    # Minimal request model; the original payload type was not shown
    user_query: str

@router.post("/create")
async def create_report_endpoint(payload: ReportRequest):
    # Renamed from create_report: the original name shadowed the helper
    # above, so the later create_report(...) call invoked this endpoint
    # instead of the helper.
    async def report_generator():
        try:
            logging.info("Received query: %s", payload.user_query)

            print("Request received: ", payload)

            yield json.dumps({"message": "in progress.", "current_step": "plan generation", "response": "", "completed": False}) + "\n"

            suggestion_response = get_suggestion(payload.user_query)
            yield json.dumps({"message": "suggestion complete.", "current_step": "plan generation completed", "response": suggestion_response, "completed": False}) + "\n"

            code_snippets_list = create_report(payload.user_query, suggestion_response)
            yield json.dumps({"message": "Code generated.", "current_step": "Code generated: Step 2 of 3 complete.", "response": code_snippets_list, "completed": False}) + "\n"

        except HTTPException as e:
            logging.error("HTTPException: %s", str(e))
            raise
        except Exception:
            logging.exception("Exception occurred during report generation")
            raise HTTPException(status_code=500, detail="Error occurred in creating report")

    return StreamingResponse(report_generator(), media_type="application/json")
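Since each request drives two blocking assistant runs and concurrent requests are what trip the rate limit, one way to keep the service under the deployment's TPM quota is to bound the number of in-flight runs with an `asyncio.Semaphore` and push the blocking calls onto worker threads. A minimal sketch under assumptions (the limit of 2 is an illustrative value, not tuned against any real quota; `slow_call` is a stand-in for `run_conversation`):

```python
import asyncio
import time

async def run_limited(sem, blocking_fn, *args):
    """Run a blocking assistant call in a worker thread, holding `sem`
    so only a bounded number of runs are in flight at once."""
    async with sem:
        return await asyncio.to_thread(blocking_fn, *args)

async def demo():
    # Cap concurrent "assistant runs" at 2 (an assumed limit).
    sem = asyncio.Semaphore(2)

    def slow_call(x):
        time.sleep(0.05)  # stand-in for a blocking run_conversation call
        return x * 2

    # Four requests arrive at once; at most two run simultaneously.
    return await asyncio.gather(*(run_limited(sem, slow_call, i) for i in range(4)))

print(asyncio.run(demo()))  # → [0, 2, 4, 6]
```

In the endpoint, the two helpers could be wrapped the same way, e.g. `await run_limited(sem, get_suggestion, payload.user_query)` (hypothetical wiring), which also stops the blocking runs from stalling the event loop.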
sankethgadadinni commented 2 months ago

@jhakulin anything on this issue?

jhakulin commented 2 months ago

@sankethgadadinni Does the problem still happen? At least the gpt-4o-mini problem I mentioned earlier seems to be solved now on the service side, and I cannot reproduce the issue with it.

sankethgadadinni commented 2 months ago

@jhakulin I'm using gpt-4o, but it is not about the model.

I have a flow where one assistant gets called to do something and then another assistant to do some other thing.

When I make more than two calls to this flow, they end up in errors.

In general, like the MultiAgentOrchestrator example in the repo, can I call the assistants sequentially?

jhakulin commented 2 months ago

@sankethgadadinni I can see rate-limit failures when running the MultiAgentOrchestrator sample in Azure, so I wonder if you are seeing something like that. Could you please set the ASSISTANT_LOG_TO_CONSOLE environment variable to get logs in the console and share those with me?

You can also plug into AssistantClientCallbacks and its on_run_failed callback, which would tell your app if a run failed for some error.
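The callback pattern above can surface run failures (like rate_limit_exceeded) to the FastAPI layer. A rough illustrative sketch follows; the base class here is a self-contained stand-in, since the real `AssistantClientCallbacks` signature in the library may differ, and the `on_run_failed` argument names are assumptions:

```python
# Stand-in for the library's AssistantClientCallbacks base class;
# the real class and the exact on_run_failed signature may differ.
class AssistantClientCallbacks:
    def on_run_failed(self, assistant_name, run_identifier, run_end_time, error_code, error_message):
        pass

class LoggingCallbacks(AssistantClientCallbacks):
    """Record failed runs so the app can react (retry, return 503, ...)."""
    def __init__(self):
        self.failures = []

    def on_run_failed(self, assistant_name, run_identifier, run_end_time, error_code, error_message):
        self.failures.append((assistant_name, error_code))
        print(f"run {run_identifier} failed: {error_code}: {error_message}")

callbacks = LoggingCallbacks()
# Simulated failure event, mirroring the rate-limit error in the logs below:
callbacks.on_run_failed("CodeProgrammerAgent", "run_123", "2024-08-19", "rate_limit_exceeded", "Rate limit is exceeded.")
```

The app would pass such a callbacks object when constructing the assistant client, per the repo's samples.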

sankethgadadinni commented 2 months ago

@jhakulin You were right. It was because of the rate-limit error. Here are the logs recorded.

I have increased the TPM from 40k to 150k. I think it should be fine now.

Would you like to proceed with extracting the relevant data and creating the dashboard components?
 in thread: thread_iDsnxfX1ssogcyxlJSNhKIJI, attachments: None, images: []
2024-08-19 15:30:52,798 - INFO - _process_messages_non_streaming - Creating a run for assistant: asst_3uhTJh54pAksDSePH1exlzBN and thread: thread_iDsnxfX1ssogcyxlJSNhKIJI
2024-08-19 15:30:53,139 - INFO - _process_messages_non_streaming - Processing run: run_72Fey617NXkXLP3pUJVJx50H with status: requires_action
2024-08-19 15:30:53,139 - INFO - _handle_required_action - Handling required action
2024-08-19 15:30:53,139 - INFO - _handle_function_call - Handling function call: retrieve_file_content_from_directory with arguments: {"input_directory": "app/data", "filename": "data_dictionary.csv"}
2024-08-19 15:30:53,154 - INFO - _handle_function_call - Calling function: retrieve_file_content_from_directory with arguments: {'input_directory': 'app/data', 'filename': 'data_dictionary.csv'}
2024-08-19 15:30:53,156 - INFO - load_function_configs - Loading function specifications from C:\Users\SankethSiddannaGadad\.config\azure-ai-assistant
2024-08-19 15:30:53,158 - INFO - load_function_error_specs - Loading function error specifications from C:\Users\SankethSiddannaGadad\.config\azure-ai-assistant
2024-08-19 15:30:53,163 - INFO - load_function_error_specs - Loading function error specs from C:\Users\SankethSiddannaGadad\.config\azure-ai-assistant\function_error_specs.json
2024-08-19 15:30:53,164 - ERROR - load_function_error_specs - The 'C:\Users\SankethSiddannaGadad\.config\azure-ai-assistant\function_error_specs.json' file was not found.
2024-08-19 15:30:53,172 - INFO - _process_tool_calls - Function response: {"data_dictionary.csv": "Column Name,Description\nYear,The year when the data was recorded.Range: 2018 to 2023\nCountry,The country where the data was recorded.\nCategory,The primary category of the product.\nSub Category,The sub-category of the product.\nChannel,The sales channel through which the product was sold.\nBrand,The brand of the product.\nSKU,The stock keeping unit identifier for the product.\nRevenue,The revenue generated from the product sales\nVolume,\"The volume of the product sold, in liters.\""}
2024-08-19 15:30:54,775 - INFO - retrieve_conversation - Retrieved messages content: [Message(id='msg_WI0iurhRTapvLyFhi6dM0aHX', assistant_id=None, attachments=[], completed_at=None, content=[TextContentBlock(text=Text(annotations=[], value='\nDevelop business dashboard components including hierarchical filters, numerical or a string KPIs with proper units for numerical values, plotly charts and tables in response to the user\'s query, using the suggestions given context.\n\nUser Query: Read the \'data_dictionary.csv\' file located at the specified path below. Then, Create a business dashboard components using the user request.\nData dictionary path:app/data\\data_dictionary.csv. \nUser request: create a report on Volume\n\nContext: Based on the information in the data dictionary, here is a structured approach to creating dashboard components for a report on "Volume":\n\n### Hierarchical Filters\n1. Year\n2. Country\n3. Category\n4. Sub-Category\n5. Channel\n6. Brand\n\n### Key Performance Indicators (KPIs)\n1. **Total Volume Sold**: Sum of the `Volume` column (in liters)\n2. **Average Volume per Transaction**: Average of the `Volume` column (in liters)\n3. **Highest Volume Sold SKU**: SKU with the maximum volume\n4. **Top-Selling Brand**: Brand with the highest total volume sold\n\n### Charts\n1. **Bar Chart**: Total Volume Sold by Year\n   - X-axis: `Year`\n   - Y-axis: Total `Volume`\n   \n2. **Bar Chart**: Total Volume Sold by Country\n   - X-axis: `Country`\n   - Y-axis: Total `Volume`\n   \n3. 
**Bar Chart**: Total Volume Sold by Category\n   - X-axis: `Category`\n   - Y-axis: Total `Volume`\n\n### Table\n- A detailed table showing:\n  - `SKU`\n  - `Brand`\n  - `Category`\n  - `Sub Category`\n  - `Channel`\n  - `Volume`\n  - `Revenue`\n\nThese components will help in providing a comprehensive view of the volume-related data for the business.\n\nWould you like to proceed with extracting the relevant data and creating the dashboard components?\n'), type='text')], created_at=1724061651, incomplete_at=None, incomplete_details=None, metadata={}, object='thread.message', role='user', run_id=None, status=None, thread_id='thread_iDsnxfX1ssogcyxlJSNhKIJI')]
2024-08-19 15:30:55,313 - INFO - _process_messages_non_streaming - Processing run: run_72Fey617NXkXLP3pUJVJx50H with status: in_progress
2024-08-19 15:30:55,842 - INFO - _process_messages_non_streaming - Processing run: run_hdK21cWxFct9Czfd0CxTbHZU with status: in_progress
2024-08-19 15:30:56,377 - INFO - _process_messages_non_streaming - Processing run: run_72Fey617NXkXLP3pUJVJx50H with status: in_progress
2024-08-19 15:30:56,902 - INFO - _process_messages_non_streaming - Processing run: run_hdK21cWxFct9Czfd0CxTbHZU with status: in_progress
2024-08-19 15:30:57,430 - INFO - _process_messages_non_streaming - Processing run: run_72Fey617NXkXLP3pUJVJx50H with status: in_progress
2024-08-19 15:30:57,961 - INFO - _process_messages_non_streaming - Processing run: run_hdK21cWxFct9Czfd0CxTbHZU with status: in_progress
2024-08-19 15:30:58,482 - INFO - _process_messages_non_streaming - Processing run: run_72Fey617NXkXLP3pUJVJx50H with status: in_progress
2024-08-19 15:30:59,057 - INFO - _process_messages_non_streaming - Processing run: run_hdK21cWxFct9Czfd0CxTbHZU with status: failed
2024-08-19 15:30:59,061 - WARNING - _process_messages_non_streaming - Processing run status: failed, error code: rate_limit_exceeded, error message: Rate limit is exceeded. Try again in 6 seconds.
2024-08-19 15:30:59,469 - INFO - _process_messages_non_streaming - Processing run: run_72Fey617NXkXLP3pUJVJx50H with status: failed
2024-08-19 15:30:59,472 - WARNING - _process_messages_non_streaming - Processing run status: failed, error code: rate_limit_exceeded, error message: Rate limit is exceeded. Try again in 5 seconds.
2024-08-19 15:30:59,521 - INFO - retrieve_conversation - Retrieved messages content: [Message(id='msg_WI0iurhRTapvLyFhi6dM0aHX', assistant_id=None, attachments=[], completed_at=None, content=[TextContentBlock(text=Text(annotations=[], value='\nDevelop business dashboard components including hierarchical filters, numerical or a string KPIs with proper units for numerical values, plotly charts and tables in response to the user\'s query, using the suggestions given context.\n\nUser Query: Read the \'data_dictionary.csv\' file located at the specified path below. Then, Create a business dashboard components using the user request.\nData dictionary path:app/data\\data_dictionary.csv. \nUser request: create a report on Volume\n\nContext: Based on the information in the data dictionary, here is a structured approach to creating dashboard components for a report on "Volume":\n\n### Hierarchical Filters\n1. Year\n2. Country\n3. Category\n4. Sub-Category\n5. Channel\n6. Brand\n\n### Key Performance Indicators (KPIs)\n1. **Total Volume Sold**: Sum of the `Volume` column (in liters)\n2. **Average Volume per Transaction**: Average of the `Volume` column (in liters)\n3. **Highest Volume Sold SKU**: SKU with the maximum volume\n4. **Top-Selling Brand**: Brand with the highest total volume sold\n\n### Charts\n1. **Bar Chart**: Total Volume Sold by Year\n   - X-axis: `Year`\n   - Y-axis: Total `Volume`\n   \n2. **Bar Chart**: Total Volume Sold by Country\n   - X-axis: `Country`\n   - Y-axis: Total `Volume`\n   \n3. 
**Bar Chart**: Total Volume Sold by Category\n   - X-axis: `Category`\n   - Y-axis: Total `Volume`\n\n### Table\n- A detailed table showing:\n  - `SKU`\n  - `Brand`\n  - `Category`\n  - `Sub Category`\n  - `Channel`\n  - `Volume`\n  - `Revenue`\n\nThese components will help in providing a comprehensive view of the volume-related data for the business.\n\nWould you like to proceed with extracting the relevant data and creating the dashboard components?\n'), type='text')], created_at=1724061651, incomplete_at=None, incomplete_details=None, metadata={}, object='thread.message', role='user', run_id=None, status=None, thread_id='thread_iDsnxfX1ssogcyxlJSNhKIJI')]
2024-08-19 15:30:59,919 - INFO - retrieve_conversation - Retrieved messages content: [Message(id='msg_QnMXKoWaEiIHGQDdTW0jY6R4', assistant_id=None, attachments=[], completed_at=None, content=[TextContentBlock(text=Text(annotations=[], value="\nDevelop business dashboard components including hierarchical filters, numerical or a string KPIs with proper units for numerical values, plotly charts and tables in response to the user's query, using the suggestions given context.\n\nUser Query: Read the 'data_dictionary.csv' file located at the specified path below. Then, Create a business dashboard components using the user request.\nData dictionary path:app/data\\data_dictionary.csv. \nUser request: create a report on Volume\n\nContext: Based on the provided data dictionary and the request to create a report on Volume, here are the suggested components for the business dashboard:\n\n**Hierarchical Filters:**\n1. Year\n2. Country\n3. Category\n4. Sub Category\n5. Channel\n6. Brand\n\n**KPIs:**\n1. Total Volume Sold:  {sum(Volume)} liters\n2. Average Volume per SKU: {average(Volume)} liters\n3. Highest Volume Record: {max(Volume)} liters\n4. Volume as a String: {anyStringRepresentationOf(Volume)}\n\n**Charts:**\n1. Bar Chart: Volume Sold by Year\n2. Bar Chart: Volume Sold by Country\n3. Pie Chart: Volume Sold by Channel\n\n**Table:**\n- Columns: Year, Country, Category, Sub Category, Channel, Brand, SKU, Volume\n\nThese components will provide deep insights into the volume data based on the specified filters.\n"), type='text')], created_at=1724061650, incomplete_at=None, incomplete_details=None, metadata={}, object='thread.message', role='user', run_id=None, status=None, thread_id='thread_XyszXwfCF8GxFAVdWDiyED4d')]
jhakulin commented 2 months ago

Thank you @sankethgadadinni, closing the issue.