OumarDicko opened 9 months ago
@LinxinS97 fyi
You need to provide a correct path or environment variable for `OAI_CONFIG_LIST` with the following information:

```json
// Please modify the content, remove these two lines of comment and rename this file to OAI_CONFIG_LIST to run the sample code.
// If using pyautogen v0.1.x with Azure OpenAI, please replace "base_url" with "api_base" (line 11 and line 18 below). Use "pip list" to check the version of pyautogen installed.
[
    {
        "model": "gpt-4",
        "api_key": "<your OpenAI API key here>"
    },
    {
        "model": "<your Azure OpenAI deployment name>",
        "api_key": "<your Azure OpenAI API key here>",
        "base_url": "<your Azure OpenAI API base here>",
        "api_type": "azure",
        "api_version": "2023-07-01-preview"
    },
    {
        "model": "<your Azure OpenAI deployment name>",
        "api_key": "<your Azure OpenAI API key here>",
        "base_url": "<your Azure OpenAI API base here>",
        "api_type": "azure",
        "api_version": "2023-07-01-preview"
    }
    ...
]
```
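To make the filtering behavior concrete, here is a rough, standard-library-only sketch of what loading such a config list and filtering it by model looks like. The helper function and the placeholder keys below are illustrative, not AutoGen's actual implementation; in practice you would call `autogen.config_list_from_json` with a `filter_dict`.

```python
import json

# Sample config list as a JSON string (placeholder keys, comments removed
# since strict JSON does not allow them).
SAMPLE_CONFIG = """
[
    {"model": "gpt-4", "api_key": "sk-placeholder"},
    {"model": "gpt-35-turbo", "api_key": "azure-placeholder",
     "base_url": "https://example.openai.azure.com",
     "api_type": "azure", "api_version": "2023-07-01-preview"}
]
"""

def filter_config_list(raw_json: str, filter_dict: dict) -> list:
    """Keep only entries whose value for every filter key is in the allowed list.

    Illustrative stand-in for the filtering done by config_list_from_json.
    """
    configs = json.loads(raw_json)
    return [
        c for c in configs
        if all(c.get(key) in allowed for key, allowed in filter_dict.items())
    ]

gpt4_only = filter_config_list(SAMPLE_CONFIG, {"model": ["gpt-4"]})
print(gpt4_only)  # only the gpt-4 entry survives the filter
```

Note that if the filter matches nothing you silently get an empty list, which is a common source of confusing downstream errors.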
Also, I highly recommend you use `gpt-4` as the `builder_model`.
The problem you're encountering stems from a discrepancy in your configuration: you're using `gpt-4-1106-preview` in your `config_list`, but have `gpt-3.5-turbo` set in `AgentBuilder`. Aligning both to the same model configuration should resolve the issue.
I think we should support passing a `config_list` to the `AgentBuilder`. In some cases, LLM settings can only be set with environment variables rather than a config file.
AgentBuilder supports environment variables holding a JSON-format string; you can check `autogen.config_list_from_json` for more details. I think the name of the argument `config_path` is not good enough, but I'm not sure if it should be changed.
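The environment-variable path mentioned above can be sketched with just the standard library: the variable holds the whole config list as a JSON string, which is then parsed. The env-var name follows the convention used in AutoGen's docs, but the parsing shown here is a simplified illustration of what `config_list_from_json` does when given an env-var name instead of a file path.

```python
import json
import os

# Store the config list as a JSON string in an environment variable
# (placeholder key; set this in your shell or .env in practice).
os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {"model": "gpt-4", "api_key": "sk-placeholder"}
])

# Parse it back into a list of config dicts.
config_list = json.loads(os.environ["OAI_CONFIG_LIST"])
print(config_list[0]["model"])  # gpt-4
```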
Thanks @LinxinS97, a better way could be to accept `llm_config` in `AgentBuilder`, like all the other agents do. Then users can load their `config_list` with any method they want: `config_list_from_json`, `config_list_from_dotenv`, etc.
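The proposed API could look something like the hypothetical sketch below: a builder that takes an `llm_config` dict (with a `config_list` inside), matching the convention other AutoGen agents follow. The class and its validation are purely illustrative, not AutoGen code.

```python
# Hypothetical builder accepting llm_config, mirroring the convention
# used by other AutoGen agents. Illustrative only.
class BuilderSketch:
    def __init__(self, llm_config: dict):
        config_list = llm_config.get("config_list", [])
        if not config_list:
            raise ValueError("llm_config must contain a non-empty config_list")
        self.config_list = config_list
        # Other llm_config keys (temperature, etc.) would be kept alongside.
        self.settings = {k: v for k, v in llm_config.items() if k != "config_list"}

# The user loads config_list however they like, then passes it in.
config_list = [{"model": "gpt-4", "api_key": "sk-placeholder"}]
builder = BuilderSketch(llm_config={"config_list": config_list, "temperature": 0})
print(builder.config_list[0]["model"])  # gpt-4
```

The design benefit is that the builder no longer cares whether the list came from a file, an env var, or a `.env` loader.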
Agreed. I've only started checking out AutoGen in the past few days, but this way of loading config seems out of sync with everything else I've come across so far.
When I try to run AutoBuilder I have this error (I'm not a pro, I just copy-pasted the AutoGen code):

```
Traceback (most recent call last):
  File "C:\Users\hp\Desktop\CODE\autogen\autogen\agentbuilder.py", line 11, in <module>
    agent_list, agent_configs = builder.build(building_task, default_llm_config)
  File "C:\Users\hp\Desktop\CODE\autogen\autogen\autogen\agentchat\contrib\agent_builder.py", line 286, in build
    build_manager.create(
  File "C:\Users\hp\Desktop\CODE\autogen\autogen\autogen\oai\client.py", line 250, in create
    response = self._completions_create(client, params)
  File "C:\Users\hp\Desktop\CODE\autogen\autogen\autogen\oai\client.py", line 336, in _completions_create
    response = completions.create(**params)
  File "C:\Users\hp\Desktop\CODE\autogen\.venv\lib\site-packages\openai\_utils\_utils.py", line 298, in wrapper
    raise TypeError(msg)
TypeError: Missing required arguments; Expected either ('messages' and 'model') or ('messages', 'model' and 'stream') arguments to be given
```
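Reading the traceback together with the earlier comments, a likely cause (an assumption, not confirmed from the code) is that the requested model didn't match any entry in the config list, so no `model` argument ever reached the OpenAI client. A small preflight check makes that failure explicit; the helper below is illustrative, not part of AutoGen.

```python
import json

def check_model_available(raw_json: str, wanted_model: str) -> dict:
    """Return the first config entry for wanted_model, or raise a clear error.

    Illustrative preflight check; not part of AutoGen.
    """
    configs = json.loads(raw_json)
    matches = [c for c in configs if c.get("model") == wanted_model]
    if not matches:
        available = sorted({c.get("model") for c in configs})
        raise ValueError(
            f"No config entry for {wanted_model!r}; available models: {available}"
        )
    return matches[0]

# A config list that only contains gpt-4-1106-preview (placeholder key).
sample = '[{"model": "gpt-4-1106-preview", "api_key": "sk-placeholder"}]'

entry = check_model_available(sample, "gpt-4-1106-preview")
print(entry["model"])  # the matching entry is found

# Asking for a model that isn't configured fails loudly instead of
# surfacing later as a confusing TypeError inside the client.
try:
    check_model_available(sample, "gpt-3.5-turbo")
except ValueError as e:
    print("caught:", e)
```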
Here is my code:

```python
import autogen
from autogen.agentchat.contrib.agent_builder import AgentBuilder

config_path = "autogen\OAI_CONFIG_LIST.json"
default_llm_config = {'temperature': 0}

builder = AgentBuilder(config_path=config_path, builder_model='gpt-3.5-turbo', agent_model='gpt-3.5-turbo')
building_task = "Find Paper Text To Speech and summirize it and make a graph to score it base on criteria you will choose"

agent_list, agent_configs = builder.build(building_task, default_llm_config)

def start_task(execution_task: str, agent_list: list, llm_config: dict):
    config_list = autogen.config_list_from_json(config_path, filter_dict={"model": ["gpt-4-1106-preview"]})

start_task(
    execution_task="Find a recent paper about Text To Speech on arxiv and find its potential applications in software.",
    agent_list=agent_list,
    llm_config=default_llm_config
)
```