thisismygitrepo opened this issue 8 months ago
Which version of pyautogen are you using? I just tested this with the latest pyautogen==0.2.6
and didn't see any error.
@rickyloynd-microsoft Same, 0.2.6
I'm trying all possible llm_config variants, to no avail.
Can you paste your OAI_CONFIG_LIST file here, after deleting the api_key strings?
llm_config = {
    'timeout': 600,
    'request_timeout': 600,
    'cache_seed': 42,
    'seed': 42,
    'config_list': [{'api_key': 'sk-blah', 'api_type': 'openai'}],
    'temperature': 0,
}
I tried with my workplace's Azure subscription, but no luck.
I hope that this is just as good as using an OAI_CONFIG_LIST file.
Some of the keys are invalid now with openai>=1.0: request_timeout should be removed; it's replaced by timeout. Same with seed, which was replaced by cache_seed. These details are from the migration guide. See if that helps.
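As a sketch, the config above with those renames applied would look like this (the "sk-blah" key is still a placeholder):

```python
# Sketch of the llm_config migrated for openai>=1.0:
# 'request_timeout' is dropped in favor of 'timeout',
# and 'seed' in favor of 'cache_seed'.
llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": [{"api_key": "sk-blah", "api_type": "openai"}],
    "temperature": 0,
}
```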
@rickyloynd-microsoft I removed all the options; it's now
llm_config = {
    'config_list': [{'api_key': 'sk-blah', 'api_type': 'openai'}],
}
But no luck, the exact same error.
Did you modify simple_chat.py in any way, other than llm_config?
What's your openai version? openai==1.6.1 should work.
openai says it's 1.7.1.
pyautogen is 0.2.6, and I can confirm that my code exactly matches the example. Concretely:

import autogen
from autogen import UserProxyAgent, ConversableAgent

config_list = [{'api_key': 'sk-blah', 'api_type': 'openai'}]

def main():
    # Load LLM inference endpoints from an env variable or a file
    # See https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints
    # and OAI_CONFIG_LIST_sample.
    # For example, if you have created a OAI_CONFIG_LIST file in the current working directory, that file will be used.

    # Create the agent that uses the LLM.
    assistant = ConversableAgent("agent", llm_config={"config_list": config_list})

    # Create the agent that represents the user in the conversation.
    user_proxy = UserProxyAgent("user", code_execution_config=False)

    # Let the assistant start the conversation. It will end when the user types exit.
    assistant.initiate_chat(user_proxy, message="How can I help you today?")

main()
Result: same error.
Tried 1.6.1, same error.

You need to add a model key to the config entry. We don't have a default model now. #1032
@ekzhu Thanks, finally I got to the bottom of it.
But for your info, I had to hack around it just to do what you said, because without the hack the constructor refuses the model argument you mentioned and throws an error.
I had to dynamically add model later on. That was not a smooth start with autogen, lol.
@ekzhu I'm wondering, why does simple_chat.py work for me, with no model specified, using the latest pyautogen==0.2.6? Oh I see, it's the model key in llm_config. I had that in mine, but didn't notice that @thisismygitrepo was missing it. We need a clearer error message for a missing model key.
@thisismygitrepo Instead of trying to pass the model to get_config_list, or dynamically adding it later, you can just add a model item to your config_list like this:

config_list = [{"model": "gpt-4", "api_key": "sk-blah"}]

Or, if you are using a config list from a file (as the unmodified simple_chat.py does), then your OAI_CONFIG_LIST entries should all contain model items, as in OAI_CONFIG_LIST_sample.
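For illustration, here is what such a file body might look like, with placeholder keys and a quick check that every entry carries a model item (the exact entries are hypothetical, not taken from OAI_CONFIG_LIST_sample):

```python
import json

# Hypothetical contents of an OAI_CONFIG_LIST file: a JSON list whose
# entries each include a "model" key alongside the credentials.
oai_config_list = """[
    {"model": "gpt-4", "api_key": "sk-blah"},
    {"model": "gpt-3.5-turbo", "api_key": "sk-blah"}
]"""

entries = json.loads(oai_config_list)
print(all("model" in entry for entry in entries))  # → True
```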
@rickyloynd-microsoft Yep, I'm going ahead with that. I just thought that the get_config_list constructor was useful because it provides guardrails about what is expected and what is not.
So I guess this issue can be reduced to fixing the get_config_list arguments. I'm happy for this issue to be closed.
This is a great first issue. We need to have a better error message when the model key is not present.
So I see the error message mentions "TypeError: Missing required arguments; Expected either ('messages' and 'model') or ('messages', 'model' and 'stream') arguments to be given".
Are we trying to handle it more gracefully on the autogen side?
If so, it would be the oai client file here, right? https://github.com/microsoft/autogen/blob/autogenstudio/autogen/oai/client.py#L82
So the goal is to add validation logic to config_list? Happy to help, just want to make sure I've got the need right.
Thanks @zbram101! I believe you are mostly correct, although we have some special handling for Azure OpenAI, which I believe doesn't require a model parameter. You probably want to make sure it works in the following scenarios:
For the above scenarios, I think the model key is only required for (1) and (3). But don't take my word for it 😆
See the code here: https://github.com/microsoft/autogen/blob/autogenstudio/autogen/oai/client.py#L211
Also make sure to follow the updates on this one: https://github.com/microsoft/autogen/pull/1232.
You can make @sonichi your reviewer as he has the most knowledge about the client wrapper.
I had the same error: the argument 'model' was missing in my llm_config.
What would have helped me is a clear and bold statement that an agent is missing the model. The current stack trace and message hinted at a missing 'model' argument, but I thought it was an incompatibility between openai and autogen or pyautogen.
Something like "Agent XYZ is missing an argument 'model'. Please check if a 'model' is supplied in your llm_config" would be awesome, in my opinion.
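A sketch of what such a check might look like; the function name and its placement are hypothetical, not autogen's actual API:

```python
def check_llm_config(agent_name, llm_config):
    # Hypothetical helper: fail fast with an agent-specific message
    # instead of the generic TypeError raised by the OpenAI client.
    for entry in llm_config.get("config_list", []):
        if "model" not in entry:
            raise ValueError(
                f"Agent '{agent_name}' is missing an argument 'model'. "
                "Please check if a 'model' is supplied in your llm_config."
            )

# Passes silently when every entry has a model:
check_llm_config("assistant", {"config_list": [{"model": "gpt-4", "api_key": "sk-blah"}]})
```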
@janikdotzel Sure on it!
@ekzhu I would contend this check should be in __init__ rather than create. create runs when we run the chat, and since this is a configuration issue it should be caught early on, i.e. in __init__. @sonichi, thoughts? I've got a combined function to validate the three formats of submission below.
{
    "model": "gpt-4",
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
    "api_type": "azure",
    "base_url": os.environ.get("AZURE_OPENAI_API_BASE"),
    "api_version": "2023-03-15-preview",
},
{
    "model": "gpt-3.5-turbo",
    "api_key": os.environ.get("OPENAI_API_KEY"),
    "api_type": "open_ai",
    "base_url": "https://api.openai.com/v1",
},
{
    "model": "llama-7B",
    "base_url": "http://127.0.0.1:8080",
    "api_type": "open_ai",
}
Although, I'm not a big fan of llama-7B using api_type open_ai. Should we introduce a "custom" api_type for models other than azure or open_ai? Let me know; below is a snippet for part of the check.
if api_type == "azure":
    required_keys = {"model", "api_key", "base_url", "api_version"}
elif api_type == "open_ai":
    required_keys = {"model", "api_key"}
elif api_type == "custom":
    required_keys = {"model", "base_url"}
else:
    raise ValueError(f"Invalid api_type '{api_type}' in configuration at index {idx}.")
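Fleshed out slightly, that check could look like the following. Both validate_config_list and the "custom" api_type are proposals from this thread, not existing autogen API:

```python
def validate_config_list(config_list):
    # Proposed validation sketch (not autogen's actual API): check that
    # each entry carries the keys its api_type requires.
    for idx, cfg in enumerate(config_list):
        api_type = cfg.get("api_type", "open_ai")
        if api_type == "azure":
            required_keys = {"model", "api_key", "base_url", "api_version"}
        elif api_type == "open_ai":
            required_keys = {"model", "api_key"}
        elif api_type == "custom":
            required_keys = {"model", "base_url"}
        else:
            raise ValueError(f"Invalid api_type '{api_type}' in configuration at index {idx}.")
        missing = required_keys - cfg.keys()
        if missing:
            raise ValueError(f"Configuration at index {idx} is missing keys: {sorted(missing)}")

# Passes silently when the required keys are present:
validate_config_list([{"model": "gpt-4", "api_key": "sk-blah", "api_type": "open_ai"}])
```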
I think this is going in the right direction. Though I would save the custom api_type for a future PR as it requires a bigger change.
It's trickier than that. OpenAI endpoints don't require model to construct a client; it can be passed in create. Azure OpenAI endpoints do require a "deployment_name" or "model" (which will be processed internally) when constructing a client. #1232 will need to be done to finish the full support for Azure OpenAI.
I'd suggest doing the check in ConversableAgent for now instead of in client.py.
llm_config={"config_list": [
    {"model": "gpt-4", "api_key": "sk-"},
    {"model": "gpt-3.5-turbo", "api_key": "sk-", "api_version": "2023-03-01-preview"}
]}

I am still getting this error. This llm_config seems correct as per the discussion, is it not? I am using the OpenAI API. pyautogen==0.2.6
User_proxy (to chat_manager):
sum 2 numbers in Python
--------------------------------------------------------------------------------
Messages: [{'content': 'sum 2 numbers in Python', 'role': 'user', 'name': 'User_proxy'}, {'role': 'system', 'content': "Read the above conversation. Then select the next role from ['User_proxy', 'coder', 'Senior Coder'] to play. Only return the role."}]
Sender: None
Config: None
Client <autogen.oai.client.OpenAIWrapper object at 0x000001954D04C040>
Messages: [{'content': 'sum 2 numbers in Python', 'name': 'User_proxy', 'role': 'user'}]
Sender: <autogen.agentchat.groupchat.GroupChatManager object at 0x000001954C05D780>
Config: None
Client <autogen.oai.client.OpenAIWrapper object at 0x000001954C00EEF0>
Missing required arguments; Expected either ('messages' and 'model') or ('messages', 'model' and 'stream') arguments to be given
Traceback (most recent call last):
File "E:\actualautogen\venv\lib\site-packages\nicegui\events.py", line 406, in wait_for_result
await result
File "e:\actualautogen\test_autogen.py", line 128, in send
last_processed_msg_index = await process_chat_interaction(manager, user_message, messages, groupchat, last_processed_msg_index)
File "e:\actualautogen\test_autogen.py", line 103, in process_chat_interaction
await user_proxy.a_initiate_chat(manager, message=user_message)
File "E:\actualautogen\venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 644, in a_initiate_chat
await self.a_send(self.generate_init_message(**context), recipient, silent=silent)
File "E:\actualautogen\venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 447, in a_send
await recipient.a_receive(message, self, request_reply, silent)
File "E:\actualautogen\venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 588, in a_receive
reply = await self.a_generate_reply(sender=sender)
File "E:\actualautogen\venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 1252, in a_generate_reply
final, reply = await reply_func(
File "E:\actualautogen\venv\lib\site-packages\autogen\agentchat\groupchat.py", line 425, in a_run_chat
reply = await speaker.a_generate_reply(sender=self)
File "E:\actualautogen\venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 1252, in a_generate_reply
final, reply = await reply_func(
File "E:\actualautogen\venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 737, in a_generate_oai_reply
return await asyncio.get_event_loop().run_in_executor(
File "C:\Users\Lenovo\AppData\Local\Programs\Python\Python310\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "E:\actualautogen\venv\lib\site-packages\autogen\agentchat\conversable_agent.py", line 712, in generate_oai_reply
response = client.create(
File "E:\actualautogen\venv\lib\site-packages\autogen\oai\client.py", line 278, in create
response = self._completions_create(client, params)
File "E:\actualautogen\venv\lib\site-packages\autogen\oai\client.py", line 543, in _completions_create
response = completions.create(**params)
File "E:\actualautogen\venv\lib\site-packages\openai\_utils\_utils.py", line 270, in wrapper
raise TypeError(msg)
TypeError: Missing required arguments; Expected either ('messages' and 'model') or ('messages', 'model' and 'stream')
arguments to be given
@abhishekrai43 Some agent may not be getting a proper llm_config. Please check your code, or share it if you'd like others to reproduce it.
Also, I'd remove {"api_version": "2023-03-01-preview"}.
@sonichi "Some agent may not get proper llm_config": this was it. Thanks a lot for the pointer.
@abhishekrai43 Here is the config that worked for me after hundreds of trials.

For OpenAI:

config_list = autogen.get_config_list(
    api_keys=[Read.ini(P.home().joinpath("dotfiles/creds/tokens/openai.ini"))['tokens']['alsaffar']],
    base_urls=None, api_type=None, api_version=None)
config_list[0]["model"] = "gpt-3.5-turbo-16k"

For Azure:

config_list = autogen.get_config_list(
    api_keys=[Read.ini(P.home().joinpath("dotfiles/creds/tokens/openai.ini"))['tokens']['cae14']],
    api_type='azure', api_version='2023-12-01-preview', base_urls=["https://blah"])
config_list[0]["model"] = "gpt-35-turbo-16k-default"  # by that, they mean deployment_name

llm_config = {
    "timeout": 60,
    "cache_seed": 42,
    "config_list": config_list,
    "temperature": 0,
}
Is the @thisismygitrepo post above the way to properly get the config list? Seems very convoluted.
I'm encountering the same issue running pyautogen 0.2.7.
Here is my code:
import autogen
import os

os.environ["OPENAI_API_KEY"] = <OPENAI_KEY>

config_list = [
    {
        'model': 'gpt-4',
        'api_key': <OPENAI_KEY>,
    }
]

# create an AssistantAgent named "assistant"
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={
        "cache_seed": 100,  # seed for caching and reproducibility
        "config_list": config_list,  # a list of OpenAI API configurations
        "temperature": 0,  # temperature for sampling
    },  # configuration for autogen's enhanced inference API which is compatible with OpenAI API
)

user_proxy = autogen.UserProxyAgent(
    name="Admin",
    system_message="A human admin. Interact with the planner to discuss the plan. Plan execution needs to be approved by this admin.",
    code_execution_config=False,
)

engineer = autogen.AssistantAgent(
    name="Engineer",
    llm_config=config_list,
    human_input_mode="NEVER",
    system_message="""Engineer. You follow an approved plan. You write python/shell code to solve tasks. Wrap the code in a code block that specifies the script type. The user can't modify your code. So do not suggest incomplete code which requires others to modify. Don't use a code block if it's not intended to be executed by the executor.
Don't include multiple code blocks in one response. Do not ask others to copy and paste the result. Check the execution result returned by the executor.
If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.
""",
)

executor = autogen.UserProxyAgent(
    name="Executor",
    system_message="Executor. Execute the code written by the engineer and report the result.",
    human_input_mode="NEVER",
    code_execution_config={"last_n_messages": 3, "work_dir": "paper"},
)

debugger = autogen.AssistantAgent(
    name="Debugger",
    system_message="Your job is to debug the code and fix it if it doesn't run.",
    human_input_mode="NEVER",
    llm_config=config_list,
)

groupchat = autogen.GroupChat(
    agents=[user_proxy, engineer, executor, debugger], messages=[], max_round=50
)

manager = autogen.GroupChatManager(groupchat=groupchat)

user_proxy.initiate_chat(
    manager,
    message="""Test directions"""
)
Note: the message in initiate_chat can be anything, as it errors out prior to getting to that message.
@matsuobasho It's not very well documented, but in this case you need an llm_config for the group chat manager.
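In the snippet above, the GroupChatManager is constructed without one. A minimal fragment of the fix, assuming groupchat and config_list are defined as in that snippet (this mirrors the usual autogen pattern, not a verbatim fix from the maintainers):

```python
# Pass an llm_config to the GroupChatManager as well; otherwise its
# internal client is created without a model and create() fails with
# the "Missing required arguments" TypeError.
manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config={"config_list": config_list},
)
```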
I'm experiencing basically the same error when I try to load claude with the AnthropicClient. I'm using pyautogen==0.2.22
Code:
# Open the file and read the API key
with open(file_path_2, 'r') as file:
    api_key_2 = file.read().strip()

# Set an environment variable
os.environ['ANTHROPIC_API_KEY'] = api_key_2

# Implementation of AnthropicClient
class AnthropicClient(ModelClient):
    def __init__(self, config: Dict[str, Any]):
        self._config = config
        self.model = config["model"]
        anthropic_kwargs = set(inspect.getfullargspec(Anthropic.__init__).kwonlyargs)
        filter_dict = {k: v for k, v in config.items() if k in anthropic_kwargs}
        self._client = Anthropic(**filter_dict)

    def message_retrieval(self, response: Message) -> Union[List[str], List]:
        choices = response.content
        if isinstance(response, Message):
            return [choice.text for choice in choices]  # type: ignore [union-attr]

    def create(self, params: Dict[str, Any]) -> Completion:
        if "messages" in params:
            raw_contents = params["messages"]
            if raw_contents[0]["role"] == "system":
                raw_contents = raw_contents[1:]
            params["messages"] = raw_contents
            completions: Completion = self._client.messages  # type: ignore [attr-defined]
        else:
            completions: Completion = self._client.completions

        # Streaming is not yet supported
        params = params.copy()
        params["stream"] = False
        params.pop("model_client_cls")
        response = completions.create(**params)
        return response

    def cost(self, response: Completion) -> float:
        total = 0.0
        tokens = {
            "input": response.usage.input_tokens if response.usage is not None else 0,
            "output": response.usage.output_tokens if response.usage is not None else 0,
        }
        price_per_million = {
            "input": 15,
            "output": 75,
        }
        for key, value in tokens.items():
            total += value * price_per_million[key] / 1_000_000
        return total

    @staticmethod
    def get_usage(response: Completion) -> Dict:
        return {
            "prompt_tokens": response.usage.input_tokens if response.usage is not None else 0,
            "completion_tokens": response.usage.output_tokens if response.usage is not None else 0,
            "total_tokens": (
                response.usage.input_tokens + response.usage.output_tokens if response.usage is not None else 0
            ),
            "cost": response.cost if hasattr(response, "cost") else 0,
            "model": response.model,
        }

# LLM configuration for the agent
config_list = [
    {
        "model": "claude-3-opus-20240229",
        "model_client_cls": "AnthropicClient",
        "base_url": "https://api.anthropic.com",
        "api_key": os.getenv("ANTHROPIC_API_KEY"),
        "api_type": "anthropic",
    }
]

# Convert the configuration list to a JSON string
config_json = json.dumps(config_list)

# Assign the JSON string to the 'OAI_CONFIG_LIST' environment variable
os.environ['OAI_CONFIG_LIST'] = config_json

# Create the claude conversable agent
claude = ConversableAgent(
    "claude",
    system_message="You are an expert specializing in content and document generation. You have been given a search engine tool to perform web-scale search and coding capability to solve tasks using Python code. You are responsible for writing the code and executing it. You have also been given a simple calculator to perform simple calculations to solve tasks. Reply TERMINATE when the task is done.",
    llm_config={"functions": [generate_llm_config(search_tool)], "config_list": config_list, "timeout": 120},
    code_execution_config={"executor": executor},
    function_map=None,
    human_input_mode="NEVER",  # Alternatively, "ALWAYS" or "TERMINATE"
)

# Register the tool signature with claude
claude.register_for_llm(name="calculator", description="A simple calculator")(calculator)

# Register the search engine tool function with the agent
claude.register_function(
    function_map={
        search_tool.name: search_tool._run,
    }
)

# Register the anthropic client with the agent
claude.register_model_client(model_client_cls=AnthropicClient)
Error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[22], line 24
21 def react_prompt_message(sender, recipient, context):
22 return ReAct_prompt.format(input=context["question"])
---> 24 gpt.initiate_chat(
25 claude,
26 message=react_prompt_message,
27 question="What is the result of the 2024 super bowl?",
28 )
File ~\Anaconda3\envs\agents\Lib\site-packages\autogen\agentchat\conversable_agent.py:990, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, cache, max_turns, summary_method, summary_args, message, **kwargs)
988 else:
989 msg2send = self.generate_init_message(message, **kwargs)
--> 990 self.send(msg2send, recipient, silent=silent)
991 summary = self._summarize_chat(
992 summary_method,
993 summary_args,
994 recipient,
995 cache=cache,
996 )
997 for agent in [self, recipient]:
File ~\Anaconda3\envs\agents\Lib\site-packages\autogen\agentchat\conversable_agent.py:631, in ConversableAgent.send(self, message, recipient, request_reply, silent)
629 valid = self._append_oai_message(message, "assistant", recipient)
630 if valid:
--> 631 recipient.receive(message, self, request_reply, silent)
632 else:
633 raise ValueError(
634 "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
635 )
File ~\Anaconda3\envs\agents\Lib\site-packages\autogen\agentchat\conversable_agent.py:791, in ConversableAgent.receive(self, message, sender, request_reply, silent)
789 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
790 return
--> 791 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
792 if reply is not None:
793 self.send(reply, sender, silent=silent)
File ~\Anaconda3\envs\agents\Lib\site-packages\autogen\agentchat\conversable_agent.py:1912, in ConversableAgent.generate_reply(self, messages, sender, **kwargs)
1910 continue
1911 if self._match_trigger(reply_func_tuple["trigger"], sender):
-> 1912 final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
1913 if final:
1914 return reply
File ~\Anaconda3\envs\agents\Lib\site-packages\autogen\agentchat\conversable_agent.py:1278, in ConversableAgent.generate_oai_reply(self, messages, sender, config)
1276 if messages is None:
1277 messages = self._oai_messages[sender]
-> 1278 extracted_response = self._generate_oai_reply_from_client(
1279 client, self._oai_system_message + messages, self.client_cache
1280 )
1281 return (False, None) if extracted_response is None else (True, extracted_response)
File ~\Anaconda3\envs\agents\Lib\site-packages\autogen\agentchat\conversable_agent.py:1297, in ConversableAgent._generate_oai_reply_from_client(self, llm_client, messages, cache)
1294 all_messages.append(message)
1296 # TODO: #1143 handle token limit exceeded error
-> 1297 response = llm_client.create(
1298 context=messages[-1].pop("context", None),
1299 messages=all_messages,
1300 cache=cache,
1301 )
1302 extracted_response = llm_client.extract_text_or_completion_object(response)[0]
1304 if extracted_response is None:
File ~\Anaconda3\envs\agents\Lib\site-packages\autogen\oai\client.py:627, in OpenAIWrapper.create(self, **config)
625 try:
626 request_ts = get_current_ts()
--> 627 response = client.create(params)
628 except APITimeoutError as err:
629 logger.debug(f"config {i} timed out", exc_info=True)
Cell In[20], line 36, in AnthropicClient.create(self, params)
34 params["stream"] = False
35 params.pop("model_client_cls")
---> 36 response = completions.create(**params)
38 return response
File ~\Anaconda3\envs\agents\Lib\site-packages\anthropic\_utils\_utils.py:274, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
272 else:
273 msg = f"Missing required argument: {quote(missing[0])}"
--> 274 raise TypeError(msg)
275 return func(*args, **kwargs)
TypeError: Missing required arguments; Expected either ('max_tokens', 'messages' and 'model') or ('max_tokens', 'messages', 'model' and 'stream') arguments to be given
I have the same problem; here is my code:
import os
from postgres_da_ai_agent.modules.db import PostgresManager
from postgres_da_ai_agent.modules import llm
import dotenv
import argparse
import autogen
import openai
dotenv.load_dotenv()
assert os.environ.get("DATABASE_URL"), "POSTGRES_CONNECTION_URL not found in .env file"
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY not found in .env file"

DB_URL = os.environ.get("DATABASE_URL")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")

POSTGRES_TABLE_DEFINITIONS_CAP_REF = "TABLE_DEFINITIONS"
RESPONSE_FORMAT_CAP_REF = "RESPONSE_FORMAT"
SQL_DELIMITER = "---------"
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--prompt", help="The prompt for the AI")
    args = parser.parse_args()

    if not args.prompt:
        print("Please provide a prompt")
        return

    prompt = f"Fulfill this database query: {args.prompt}."

    with PostgresManager() as db:
        db.connect_with_url(DB_URL)
        table_definitions = db.get_table_definitions_for_prompt()

        prompt = llm.add_cap_ref(
            prompt,
            f"Use these {POSTGRES_TABLE_DEFINITIONS_CAP_REF} to satisfy the database query.",
            POSTGRES_TABLE_DEFINITIONS_CAP_REF,
            table_definitions,
        )

        # GPT-4 configuration without functions
        gpt4_config_no_functions = {
            "use_cache": False,
            "temperature": 0,
            "config_list": autogen.config_list_from_models(["gpt-4"]),
            "request_timeout": 120,
        }

        # GPT-4 configuration with functions
        gpt4_config = {
            "use_cache": False,
            "temperature": 0,
            "config_list": autogen.config_list_from_models(["gpt-4"]),
            "request_timeout": 120,
            "functions": [
                {
                    "name": "run_sql",
                    "description": "Run a SQL query against the postgres database",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "sql": {
                                "type": "string",
                                "description": "The SQL query to run",
                            },
                        },
                        "required": ["sql"],
                    },
                },
            ],
        }

        # Function map
        function_map = {
            "run_sql": db.run_sql,
        }

        # Function to detect termination messages
        def is_termination_msg(content):
            have_content = content.get("content", None) is not None
            if have_content and "APPROVED" in content["content"]:
                return True
            return False

        COMPLETION_PROMPT = "If everything looks good, respond with APPROVED"

        USER_PROXY_PROMPT = (
            "A human admin. Interact with the Product Manager to discuss the plan. Plan execution needs to be approved by this admin."
            + COMPLETION_PROMPT
        )
        DATA_ENGINEER_PROMPT = (
            "A Data Engineer. You follow an approved plan. Generate the initial SQL based on the requirements provided. Send it to the Sr Data Analyst to be executed."
            + COMPLETION_PROMPT
        )
        SR_DATA_ANALYST_PROMPT = (
            "Sr Data Analyst. You follow an approved plan. You run the SQL query, generate the response, and send it to the Product Manager for final review."
            + COMPLETION_PROMPT
        )
        PRODUCT_MANAGER_PROMPT = (
            "Product Manager. Validate the response to make sure it's correct."
            + COMPLETION_PROMPT
        )

        # Create the agents with specific roles
        user_proxy = autogen.UserProxyAgent(
            name="Admin",
            system_message=USER_PROXY_PROMPT,
            code_execution_config=False,
            human_input_mode="NEVER",
            is_termination_msg=is_termination_msg,
        )
        data_engineer = autogen.AssistantAgent(
            name="Engineer",
            llm_config=gpt4_config,
            system_message=DATA_ENGINEER_PROMPT,
            code_execution_config=False,
            human_input_mode="NEVER",
            is_termination_msg=is_termination_msg,
        )
        sr_data_analyst = autogen.AssistantAgent(
            name="Sr_Data_Analyst",
            llm_config=gpt4_config,
            system_message=SR_DATA_ANALYST_PROMPT,
            code_execution_config=False,
            human_input_mode="NEVER",
            is_termination_msg=is_termination_msg,
            function_map=function_map,
        )
        product_manager = autogen.AssistantAgent(
            name="Product_Manager",
            llm_config=gpt4_config,
            system_message=PRODUCT_MANAGER_PROMPT,
            code_execution_config=False,
            human_input_mode="NEVER",
            is_termination_msg=is_termination_msg,
        )

        # Create the group chat and initialize it
        groupchat = autogen.GroupChat(
            agents=[user_proxy, data_engineer, sr_data_analyst, product_manager],
            messages=[],
            max_round=10,
        )
        manager = autogen.GroupChatManager(
            groupchat=groupchat,
            llm_config=gpt4_config_no_functions,
        )

        user_proxy.initiate_chat(manager, clear_history=True, message=prompt)


if __name__ == "__main__":
    main()
You need to use the new Chat Completions endpoint here. It might be helpful.
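For context, with openai>=1.0 a chat completion call looks like the following sketch. model is a required argument there, which is why a missing model in the config surfaces as the TypeError quoted in this thread. The key is a placeholder, and running this for real requires the openai package and valid credentials:

```python
from openai import OpenAI

client = OpenAI(api_key="sk-blah")  # placeholder key, will not authenticate

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # omitting 'model' raises the "Missing required arguments" TypeError
    messages=[{"role": "user", "content": "Hello"}],
)
```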
Describe the bug
In all examples I try to run, I get the same error. Concretely, simple chat example:
https://github.com/microsoft/autogen/blob/main/samples/simple_chat.py
Steps to reproduce
No response
Expected Behavior
No response
Screenshots and logs
No response
Additional Information
No response