microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

[Bug]: In Groupchat, existing agents based on OpenAI assistant API throw error #1197

Closed DementedWeasel1971 closed 6 months ago

DementedWeasel1971 commented 10 months ago

Describe the bug

When an assistant already exists and is instantiated with code such as:

# Configure Agents

# Technical Analysis Critic
assistant_id = os.environ.get("ASSISTANT_ID", "asst_rECmBeeNUz43ZQ3H2JUycUdk")
llm_config = {
    "config_list": config_list,
    "assistant_id": assistant_id,
#    "tools": [{"type": "retrieval"},{"type": "code_interpreter"}],
#    "file_ids": ["file-LHgfBxjo3D8IQPTseZOrQKwA"]
    # add id of an existing file in your openai account
    # in this case I added the implementation of conversable_agent.py
}

gpt_technical_analysis_critic = GPTAssistantAgent(
    name="Technical Analysis Critic",
    description="Technical Analysis Critic is a specialist at stock trading methodologies and best practice.",
    llm_config=llm_config,
)

Then in GroupChat the following error is shown:

TypeError: Completions.create() got an unexpected keyword argument 'assistant_id'

This only happens when I want to re-use agents that already exist, invoked via the following example code:

# Group Solve
import autogen

# define group chat
groupchat = autogen.GroupChat(agents=[user_proxy, gpt_system_project_planner, gpt_technical_analysis_critic, gpt_solution_architect], messages=[], max_round=10)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(
    manager,
    message="Get the number of issues and pull requests for the repository 'microsoft/autogen' over the past three weeks and offer analysis to the data. You should print the data in csv format grouped by weeks.",
)
# type exit to terminate the chat

It seems to be specific to when the existing agent is referenced.

Steps to reproduce

import logging
import os

import autogen
from autogen.agentchat import AssistantAgent
from autogen import UserProxyAgent, config_list_from_json
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

logger = logging.getLogger(__name__)
logger.setLevel(logging.WARNING)

config_list = config_list_from_json("OAI_CONFIG_LIST")

# Configure Agents

# Technical Analysis Critic
assistant_id = os.environ.get("ASSISTANT_ID", "asst_rECmBeeNUz43ZQ3H2JUycUdk")
llm_config = {
    "config_list": config_list,
    "assistant_id": assistant_id,
#    "tools": [{"type": "retrieval"},{"type": "code_interpreter"}],
#    "file_ids": ["file-LHgfBxjo3D8IQPTseZOrQKwA"]
    # add id of an existing file in your openai account
    # in this case I added the implementation of conversable_agent.py
}

gpt_technical_analysis_critic = GPTAssistantAgent(
    name="Technical Analysis Critic",
    description="Technical Analysis Critic is a specialist at stock trading methodologies and best practice.",
    llm_config=llm_config,
)
# Configure Agents

# System Project Planner
assistant_id = os.environ.get("ASSISTANT_ID", "asst_rE85ECt63wNZOGK7i11cTcLW")
llm_config = {
    "config_list": config_list,
    "assistant_id": assistant_id,
#    "tools": [{"type": "retrieval"},{"type": "code_interpreter"}],
#    "file_ids": ["file-LHgfBxjo3D8IQPTseZOrQKwA"]
    # add id of an existing file in your openai account
    # in this case I added the implementation of conversable_agent.py
}

description = "System Project Planner creates a project delivery plan and is the first resource to engage in thinking through the delivery process step by step. System Project Planner provides step-by-step guidance from project definition through to implementation."

gpt_system_project_planner = GPTAssistantAgent(
    name="System Project Planner",
    description=description,
    llm_config=llm_config,
)
# Configure Agents

# Solution Architect
assistant_id = os.environ.get("ASSISTANT_ID", "asst_4FSUH7tvkNcHIFzV4J4pGbgV")
llm_config = {
    "config_list": config_list,
    "assistant_id": assistant_id,
#    "tools": [{"type": "retrieval"},{"type": "code_interpreter"}],
#    "file_ids": ["file-LHgfBxjo3D8IQPTseZOrQKwA"]
    # add id of an existing file in your openai account
    # in this case I added the implementation of conversable_agent.py
}

gpt_solution_architect = GPTAssistantAgent(
    name="Solution Architect",
    description="Solution Architect provides solution designs using PlantUML syntax. The Solution Architect does not provide code. The Solution Architect does review code to ensure it is aligned with sound architecture principles.",
    llm_config=llm_config,
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda msg: "TERMINATE" in msg["content"],
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # set to True or image name like "python:3" to use docker
    },
    human_input_mode="NEVER",
)
# define group chat
groupchat = autogen.GroupChat(agents=[user_proxy, gpt_system_project_planner, gpt_technical_analysis_critic, gpt_solution_architect], messages=[], max_round=10)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(
    manager,
    message="Get the number of issues and pull requests for the repository 'microsoft/autogen' over the past three weeks and offer analysis to the data. You should print the data in csv format grouped by weeks.",
)
# type exit to terminate the chat

Expected Behavior

The chat should start, as it currently does for group chats whose agents are created as part of the process.

Screenshots and logs

No response

Additional Information

Windows 10 Pro, current version, Python 9.10

rickyloynd-microsoft commented 10 months ago

@afourney

ekzhu commented 10 months ago

Looks like an issue related to support for the Assistant API. @gagb @sidhujag

sonichi commented 10 months ago

@IANTHEREAL fyi

IANTHEREAL commented 10 months ago

The problem is caused by the llm_config passed in manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config). You can try removing assistant_id from the chat manager's configuration.
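The suggested fix can be sketched as a plain config split: keep `assistant_id` in the config used by the `GPTAssistantAgent`, and give the `GroupChatManager` a copy without it. This is a minimal sketch; the model name and assistant id below are placeholders, not values from this thread.

```python
# Minimal sketch of the suggested fix: the agent's llm_config keeps
# "assistant_id", while the GroupChatManager gets a copy without it,
# so Completions.create() never receives the unexpected keyword.
# The config_list entry and assistant id are placeholders.

agent_llm_config = {
    "config_list": [{"model": "gpt-4"}],   # placeholder config entry
    "assistant_id": "asst_placeholder",    # id of the existing assistant
}

# Copy everything except "assistant_id" for the manager.
manager_llm_config = {
    k: v for k, v in agent_llm_config.items() if k != "assistant_id"
}
```

The agent config stays untouched, so the existing assistant (with its knowledge, tools, and instructions) is still referenced; only the manager's config changes.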

DementedWeasel1971 commented 10 months ago

If I follow your advice and remove the assistant_id, how do I activate an already existing agent in my OpenAI account? The assistant_id is the reference to an already configured agent that I have in OpenAI. It works in a single chat between agents: I have full use of the code interpreter as well as any knowledge or functions that I have on the OpenAI platform. So I can configure and tweak an agent once, feed it specific knowledge once, and then call it.

This works like a charm, except for group chat. If it worked as well in group chat as it does in single chat, it would open up huge potential: not only would I have custom agents, I would have remote compute (a code interpreter for each agent) without needing to configure it at run time. The issue is specific to group chat; everything else works fantastically. I do not want to use networkx to orchestrate engagement between agents, and group chat in AutoGen works brilliantly for agents created locally. You might consider a code change in the group chat or llm_config to add the agent names, as I suspect that is what you are using.

If this works, or I find a workaround, it would be a huge thing for me: group chats between existing super agents.

If you later add a feature to manage thread allocation, AutoGen would be on its way to enabling a "wisdom enabler". (But I do not want to digress.) I want existing agents enabled in a group chat while they have access to all their knowledge, functions, and preset instructions, the way it works in a 1:1 chat with an existing agent, but in the group chat.

If I remove the assistant_id, I remove the ability to reference an existing agent. Or is there another way to reference it? If so, I will gladly test.

IANTHEREAL commented 10 months ago

I'm a bit confused; have you encountered a new error? The previous issue was in the chat manager's configuration, not the GPT assistant agent's. You just need to remove the assistant id from the chat manager's llm config. The rest of the GPT assistant agent's configuration, including the assistant id, can remain unchanged. The chat manager doesn't need an assistant id, right? @DementedWeasel1971

DementedWeasel1971 commented 10 months ago

Confusion might be on my side. @IANTHEREAL, when you said to remove the assistant id from the chat manager's llm config, I took it to imply that it should not be there to start with. I will test and revert.

sidhujag commented 10 months ago

I hit this and found that if I pop the assistant_id in the GPT assistant, it fixes it.
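The "pop" workaround described above can be sketched as follows. This is a hypothetical snippet under the assumption that the same dict was being shared with the manager; the dict contents are placeholders.

```python
# Hypothetical sketch of the "pop" workaround: shallow-copy the shared
# llm_config and pop "assistant_id" before passing the copy to the
# GroupChatManager. All values below are placeholders.

llm_config = {
    "config_list": [{"model": "gpt-4"}],  # placeholder config entry
    "assistant_id": "asst_placeholder",   # existing assistant's id
}

manager_llm_config = dict(llm_config)        # shallow copy
manager_llm_config.pop("assistant_id", None)  # safe even if the key is absent
```

Copying first means the agent's own config still carries the assistant id, so the existing OpenAI assistant remains reachable.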

DementedWeasel1971 commented 10 months ago

I hit this and found that if I pop the assistant_id in the GPT assistant, it fixes it.

Please share code; I would love to find the workaround. If I can get the existing agents to work in a group chat, wow: each agent's specialisation could, in theory, be used to create better-quality output, or even output in the context of a company's existing source code, which I could upload as knowledge.

gagb commented 9 months ago

@DementedWeasel1971, you need to use different llm_config objects for the group chat manager and the GPT assistant. The former does not need an assistant id and will complain if one is present; the latter can use one. Hope this helps!