ArneJanning closed this issue 6 months ago.
Thanks for noting this, @ArneJanning.

I am looking at the OAI_CONFIG_LIST documentation here, and it looks like the correct format for Azure OpenAI models is:

```json
{
    "model": "<your Azure OpenAI deployment name>",
    "api_key": "<your Azure OpenAI API key here>",
    "base_url": "<your Azure OpenAI API base here>",
    "api_type": "azure",
    "api_version": "2023-07-01-preview"
}
```

Note that the difference is `api_base` -> `base_url`.

What do you get when you try the above?
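For what it's worth, loading and filtering such a list can be sketched like this (the entries are placeholders, not real credentials; in autogen the loading is normally done with `config_list_from_json`, and the filter below just mimics its `filter_dict={"api_type": ["azure"]}` behavior):

```python
import json

# Placeholder OAI_CONFIG_LIST contents (not real credentials)
raw = json.dumps([
    {"model": "gpt-4", "api_key": "<openai key>"},
    {
        "model": "<your Azure OpenAI deployment name>",
        "api_key": "<your Azure OpenAI API key here>",
        "base_url": "<your Azure OpenAI API base here>",
        "api_type": "azure",
        "api_version": "2023-07-01-preview",
    },
])

config_list = json.loads(raw)
# Keep only the Azure entries, similar to filter_dict={"api_type": ["azure"]}
azure_configs = [c for c in config_list if c.get("api_type") == "azure"]
```

Every key in the Azure entry matters: if `api_type`/`api_version` are absent, autogen builds a plain `OpenAI` client instead of an Azure one.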
Thank you for your answer, @victordibia. I changed `api_base` to `base_url` and still get the same error:
```
autogenra ui --port 8081
INFO:     Started server process [105794]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8081 (Press CTRL+C to quit)
INFO:     127.0.0.1:56470 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:56470 - "GET /app-a69e64a20490df34caf8.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:56482 - "GET /framework-34c4775408c4a2a587eb.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:56470 - "GET /webpack-runtime-9a1c1a0dc45832ebafbd.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:56482 - "GET /page-data/index/page-data.json HTTP/1.1" 200 OK
INFO:     127.0.0.1:56470 - "GET /page-data/app-data.json HTTP/1.1" 200 OK
INFO:     127.0.0.1:56470 - "GET /component---src-pages-index-tsx-9c8c5671443f8293283f.js HTTP/1.1" 200 OK
INFO:     127.0.0.1:56470 - "GET /skills?user_id=guestuser@gmail.com HTTP/1.1" 200 OK
INFO:     127.0.0.1:56482 - "GET /messages?user_id=guestuser@gmail.com HTTP/1.1" 200 OK
INFO:     127.0.0.1:56488 - "GET /manifest.webmanifest HTTP/1.1" 200 OK
Traceback (most recent call last):
  File "/home/arne/src/autogen3/autogen/samples/apps/autogen-assistant/autogenra/web/app.py", line 85, in add_message
    response_message: Message = chatmanager.chat(
  File "/home/arne/src/autogen3/autogen/samples/apps/autogen-assistant/autogenra/autogenchat.py", line 24, in chat
    flow = AutoGenFlow(config=flow_config, history=history, work_dir=scratch_dir, asst_prompt=skills_suffix)
  File "/home/arne/src/autogen3/autogen/samples/apps/autogen-assistant/autogenra/autogenflow.py", line 26, in __init__
    self.sender = self.load(config.sender)
  File "/home/arne/src/autogen3/autogen/samples/apps/autogen-assistant/autogenra/autogenflow.py", line 116, in load
    agent = autogen.UserProxyAgent(**asdict(agent_spec.config))
  File "/home/arne/src/autogen3/.venv/lib/python3.11/site-packages/autogen/agentchat/user_proxy_agent.py", line 72, in __init__
    super().__init__(
  File "/home/arne/src/autogen3/.venv/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py", line 115, in __init__
    self.client = OpenAIWrapper(**self.llm_config)
  File "/home/arne/src/autogen3/.venv/lib/python3.11/site-packages/autogen/oai/client.py", line 77, in __init__
    self._clients = [self._client(config, openai_config) for config in config_list]  # could modify the config
  File "/home/arne/src/autogen3/.venv/lib/python3.11/site-packages/autogen/oai/client.py", line 77, in <listcomp>
  File "/home/arne/src/autogen3/.venv/lib/python3.11/site-packages/autogen/oai/client.py", line 138, in _client
    client = OpenAI(**openai_config)
  File "/home/arne/src/autogen3/.venv/lib/python3.11/site-packages/openai/_client.py", line 93, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
```
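For context, the final OpenAIError comes from the openai>=1.0 client itself; its key resolution roughly follows the sketch below (a simplified illustration for this thread, not the library's actual code):

```python
import os

def resolve_api_key(explicit_key=None):
    """Simplified illustration of how the openai>=1.0 client resolves its
    key: an explicit api_key argument wins, then the OPENAI_API_KEY
    environment variable; with neither set, the client raises an error."""
    key = explicit_key if explicit_key is not None else os.environ.get("OPENAI_API_KEY")
    if key is None:
        raise RuntimeError(
            "The api_key client option must be set either by passing "
            "api_key to the client or by setting the OPENAI_API_KEY "
            "environment variable"
        )
    return key
```

So whenever a config entry's `api_key` fails to reach `OpenAI(**openai_config)`, this exact message appears, regardless of what was typed into the UI.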
I get the same error with an OpenAI api_key, no matter how I set get_default_agent_config(). I tried

```python
llm_config = LLMConfig(
    config_list=[
        {
            "model": "gpt-4",
            "api_key": "<myKey>",
        }
    ],
    temperature=0,
)
```

and also tried the old way I use in autogen:

```python
config_list = [
    {
        "model": "gpt-4",
        "api_key": "<myKey>",
    },
]
llm_config = {
    "cache_seed": 42,
    "temperature": 0,
    "config_list": config_list,
}
```
Yeah, I have the same issue. Whatever I use, the message stays the same; the api_key is not passed through to the underlying openai library.
It works with txt files via

```python
config_list = autogen.config_list_openai_aoai(key_file_path=".\keys", exclude={"azure"})
```

If I don't have an OpenAI key, it doesn't work. My dependencies:

```toml
chainlit = "^0.7.700"
openai = "^1.3.6"
langchain = "^0.0.343"
pyautogen = "^0.2.0"
```
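For reference, `config_list_openai_aoai` reads plain text key files from `key_file_path`; a sketch of the expected layout is below (the file names are autogen's defaults as I recall them, so treat them as an assumption and verify against your installed version):

```python
import tempfile
from pathlib import Path

# Key-file layout that config_list_openai_aoai reads by default
# (assumed file names; check your autogen version):
#   key_openai.txt - OpenAI API keys, one per line
#   key_aoai.txt   - Azure OpenAI API keys, one per line
#   base_aoai.txt  - Azure OpenAI base URLs, one per line
keys_dir = Path(tempfile.mkdtemp())
(keys_dir / "key_aoai.txt").write_text("<azure-key>\n")
(keys_dir / "base_aoai.txt").write_text("https://<resource>.openai.azure.com\n")

# With the files in place, the call from the comment above becomes:
# config_list = autogen.config_list_openai_aoai(key_file_path=str(keys_dir))
```

Using `pathlib` or forward slashes also sidesteps the backslash-escaping pitfall of `".\keys"` on non-Windows shells.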
> I get the same error with OpenAI api_key, no matter how I set get_default_agent_config(). I tried `llm_config = LLMConfig(config_list=[{"model": "gpt-4", "api_key": "<myKey>"}], temperature=0)` and also tried the old way I use in autogen: `config_list = [{"model": "gpt-4", "api_key": "<myKey>"}]` with `llm_config = {"cache_seed": 42, "temperature": 0, "config_list": config_list}`.
I notice that we must edit the config on the frontend and enter the api_key there. That explains the recurring error.
The api-version is missing, so I changed lib/python3.11/site-packages/openai/_base_client.py at about line 863 to include this parameter. After this change, the Azure model set in the frontend works.

```python
url = request.url
# set url parameter to api-version=2023-07-01-preview
url = url.copy_set_param('api-version', '2023-07-01-preview')
```
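If editing the installed library is undesirable, the same effect can be approximated outside it; the helper below reproduces what `copy_set_param` does using only the standard library (a sketch for illustration, not a drop-in replacement for the patch above):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def set_api_version(url: str, version: str = "2023-07-01-preview") -> str:
    """Append or overwrite the api-version query parameter on a URL."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["api-version"] = version
    return urlunsplit(parts._replace(query=urlencode(query)))
```

Note that the openai 1.x client also accepts a `default_query` argument at construction time, which may achieve the same thing without patching the installed package.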
I've tried configuring it like this, and then running python test/twoagent.py succeeds:

```python
config_list = [
    {
        "model": "gpt-3.5-turbo",
        "api_key": "xx",
        "base_url": "xxx",
        "api_type": "azure",
        "api_version": "2023-07-01-preview",
    }
]
```
> The api-version is missing, so I change lib/python3.11/site-packages/openai/_base_client.py at about line 863 to include this parameter. After this change, the azure model set in frontend works: `url = request.url` then `url = url.copy_set_param('api-version', '2023-07-01-preview')`.
I'm sorry, I still don't get it. Where would you specify the Azure model in the frontend? And how? I tried to change getDefaultConfigFlows() in frontend/src/components/views/utils.ts to

```typescript
export const getDefaultConfigFlows = () => {
  const llm_model_config: IModelConfig[] = [
    {
      model: "gpt4-0613",
      api_key: "<key>",
      base_url: "<azure url>",
      api_type: "azure",
    },
  ];
```

but without success.
The error

```
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
```

just doesn't go away, whatever I do on the frontend or the backend.
To fix this, try changing the code at autogen-assistant/frontend/src/components/atoms.tsx to add an Api Version field to the model form. Then cd autogen-assistant/frontend and run yarn build to rebuild the frontend. After that, when adding a model, fill in the Api Version and save it. Now you can use Azure OpenAI.
When installing directly via pip and running autogenra or autogenstudio from the command line, how do you specify an Azure OpenAI config file? I also tried to edit the model config (adding key/base_url, model type, version, etc.) from the autogenstudio Build page, but that did not take effect either. Maybe a config parameter is needed for this, e.g. autogenstudio ui --config "config_file".
Looking into it. Thanks
My issue was that llm_config was not passed to the autogen.GroupChatManager constructor. After the fix, no more missing api_key error.

BEFORE:

```python
groupchat = autogen.GroupChat(agents=[user_proxy, assistant], messages=[], max_round=MAX_ITER)
manager = autogen.GroupChatManager(groupchat=groupchat)
```

AFTER:

```python
groupchat = autogen.GroupChat(agents=[user_proxy, assistant], messages=[], max_round=MAX_ITER)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
```
Here is my llm_config:

```python
config_list = [
    {
        "model": "gpt-35-turbo",
        "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
        "base_url": os.environ.get("AZURE_OPENAI_API_BASE"),
        "api_type": "azure",
        "api_version": "2023-07-01-preview",
    }
]
llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": config_list,
}
```
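Since both entries read from the environment, a quick guard like the following (a hypothetical helper, not part of autogen) fails fast when the variables are unset, instead of surfacing the api_key error deep inside the client:

```python
import os

def missing_azure_vars(env=None):
    """Return the names of required Azure OpenAI variables that are unset."""
    env = os.environ if env is None else env
    required = ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_API_BASE")
    return [name for name in required if not env.get(name)]

missing = missing_azure_vars()
if missing:
    print(f"Set these environment variables first: {', '.join(missing)}")
```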
> When direct pip and run autogenra or autogenstudio from cmd, how to specify azure OpenAI config file? and I tried to edit the model config (add key/base_url, model type and version etc) from autogenstudio Build page, it did not take effect too. Maybe need to add a config parameter for this, e.g. autogenstudio ui --config "config_file"

I had the same question about running autogenstudio ui from the command line.
Hi @huangpan2507,

AutoGen Studio is a UI layer on top of AutoGen. Currently, it is built such that the user specifies the agent workflow configuration in the UI (rather than in a config file).
If you edit the get_default_agent_config() method in utils.py to use Azure OpenAI, as described in the documentation:

```python
def get_default_agent_config(work_dir: str, skills_suffix: str = "") -> FlowConfig:
    """Get a default agent flow config."""
```

then you'll receive the error:

```
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
```