microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

API failing #209

Closed mkhazen closed 3 months ago

mkhazen commented 1 year ago

Hello

I use Colab and all of your examples work fine: I enter my GPT-4 API keys, everything works, and the notebook completes successfully. I then download the same notebook and upload it into MS Fabric Synapse Spark (I believe it uses Spark 3.2), where it is unable to connect with the GPT API keys, whether for GPT-3.5 Turbo, GPT-4, or any other GPT model.
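For reference, the setup in the example notebooks looks roughly like the sketch below (AutoGen 0.1.x style). The file name OAI_CONFIG_LIST, the model filter, and the agent names are assumptions based on the public examples, not taken from this notebook:

```python
import autogen

# Hypothetical setup mirroring the public example notebooks: load model configs
# from a JSON file conventionally named "OAI_CONFIG_LIST".
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4", "gpt-3.5-turbo"]},
)

assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent("user_proxy", human_input_mode="NEVER")
user_proxy.initiate_chat(assistant, message="...")
```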

I get this error

user_proxy (to assistant):

Find $a + b + c$, given that $x+y \neq -1$ and \begin{align} ax + by + c & = x + 7,\\ a + bx + cy & = 2x + 6y,\\ ay + b + cx & = 4x + y. \end{align}.



InvalidRequestError Traceback (most recent call last) Cell In[16], line 11 1 math_problem_to_solve = """ 2 Find $a + b + c$, given that $x+y \neq -1$ and 3 \begin{align} (...) 7 \end{align}. 8 """ 10 # the assistant receives a message from the user, which contains the task description ---> 11 user_proxy.initiate_chat(assistant, message=math_problem_to_solve)

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:531, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, **context) 517 """Initiate a chat with the recipient agent. 518 519 Reset the consecutive auto reply counter. (...) 528 "message" needs to be provided if the generate_init_message method is not overridden. 529 """ 530 self._prepare_chat(recipient, clear_history) --> 531 self.send(self.generate_init_message(**context), recipient, silent=silent)

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:334, in ConversableAgent.send(self, message, recipient, request_reply, silent) 332 valid = self._append_oai_message(message, "assistant", recipient) 333 if valid: --> 334 recipient.receive(message, self, request_reply, silent) 335 else: 336 raise ValueError( 337 "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided." 338 )

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:462, in ConversableAgent.receive(self, message, sender, request_reply, silent) 460 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False: 461 return --> 462 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender) 463 if reply is not None: 464 self.send(reply, sender, silent=silent)

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:779, in ConversableAgent.generate_reply(self, messages, sender, exclude) 777 continue 778 if self._match_trigger(reply_func_tuple["trigger"], sender): --> 779 final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"]) 780 if final: 781 return reply

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:606, in ConversableAgent.generate_oai_reply(self, messages, sender, config) 603 messages = self._oai_messages[sender] 605 # TODO: #1143 handle token limit exceeded error --> 606 response = oai.ChatCompletion.create( 607 context=messages[-1].pop("context", None), messages=self._oai_system_message + messages, **llm_config 608 ) 609 return True, oai.ChatCompletion.extract_text_or_function_call(response)[0]

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/autogen/oai/completion.py:789, in Completion.create(cls, context, use_cache, config_list, filter_func, raise_on_ratelimit_or_timeout, allow_format_str_template, **config) 787 base_config["max_retry_period"] = 0 788 try: --> 789 response = cls.create( 790 context, 791 use_cache, 792 raise_on_ratelimit_or_timeout=i < last or raise_on_ratelimit_or_timeout, 793 **base_config, 794 ) 795 if response == -1: 796 return response

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/autogen/oai/completion.py:820, in Completion.create(cls, context, use_cache, config_list, filter_func, raise_on_ratelimit_or_timeout, allow_format_str_template, **config) 818 with diskcache.Cache(cls.cache_path) as cls._cache: 819 cls.set_cache(seed) --> 820 return cls._get_response(params, raise_on_ratelimit_or_timeout=raise_on_ratelimit_or_timeout)

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/autogen/oai/completion.py:210, in Completion._get_response(cls, config, raise_on_ratelimit_or_timeout, use_cache) 208 response = openai_completion.create(**config) 209 else: --> 210 response = openai_completion.create(request_timeout=request_timeout, **config) 211 except ( 212 ServiceUnavailableError, 213 APIConnectionError, 214 ): 215 # transient error 216 logger.info(f"retrying in {retry_wait_time} seconds...", exc_info=1)

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/openai/api_resources/chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs) 23 while True: 24 try: ---> 25 return super().create(*args, **kwargs) 26 except TryAgain as e: 27 if timeout is not None and time.time() > start + timeout:

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py:155, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params) 129 @classmethod 130 def create( 131 cls, (...) 138 params, 139 ): 140 ( 141 deployment_id, 142 engine, (...) 152 api_key, api_base, api_type, api_version, organization, **params 153 ) --> 155 response, _, api_key = requestor.request( 156 "post", 157 url, 158 params=params, 159 headers=headers, 160 stream=stream, 161 request_id=request_id, 162 request_timeout=request_timeout, 163 ) 165 if stream: 166 # must be an iterator 167 assert not isinstance(response, OpenAIResponse)

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/openai/api_requestor.py:299, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout) 278 def request( 279 self, 280 method, (...) 287 request_timeout: Optional[Union[float, Tuple[float, float]]] = None, 288 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]: 289 result = self.request_raw( 290 method.lower(), 291 url, (...) 297 request_timeout=request_timeout, 298 ) --> 299 resp, got_stream = self._interpret_response(result, stream) 300 return resp, got_stream, self.api_key

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/openai/api_requestor.py:710, in APIRequestor._interpret_response(self, result, stream) 702 return ( 703 self._interpret_response_line( 704 line, result.status_code, result.headers, stream=True 705 ) 706 for line in parse_stream(result.iter_lines()) 707 ), True 708 else: 709 return ( --> 710 self._interpret_response_line( 711 result.content.decode("utf-8"), 712 result.status_code, 713 result.headers, 714 stream=False, 715 ), 716 False, 717 )

File /nfs4/pyenv-f4056475-393f-4abe-8234-de9d7e4fbf43/lib/python3.10/site-packages/openai/api_requestor.py:775, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream) 773 stream_error = stream and "error" in resp.data 774 if stream_error or not 200 <= rcode < 300: --> 775 raise self.handle_error_response( 776 rbody, rcode, resp.data, rheaders, stream_error=stream_error 777 ) 778 return resp

InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.

Is it not supposed to work in MS Fabric? Or do we need to change the authentication?

sonichi commented 1 year ago

@thinkall might have the answer :)

mkhazen commented 1 year ago

@thinkall is it because MS Fabric is running on Python 3.10? Is there anything I can do, such as a custom library, to make it work with 3.10? I cannot change the Python version in MS Fabric.

thinkall commented 1 year ago

> @thinkall is it because MS Fabric is running on Python 3.10? Is there anything I can do, such as a custom library, to make it work with 3.10? I cannot change the Python version in MS Fabric.

Hi @mkhazen, the Python version is fine. Could you share more details about how you set up the API keys for the GPT models? The error message suggests you may have configured OpenAI models while the package was calling Azure OpenAI endpoints.
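For context, in AutoGen 0.1.x the two cases are distinguished by the fields of each entry in the config list. A minimal sketch follows; the keys, endpoint, and deployment name are placeholders, not values from this issue:

```python
# Hypothetical config_list sketch; all keys, endpoints, and deployment names are placeholders.
config_list = [
    # Plain OpenAI: a model name and an API key are enough.
    {
        "model": "gpt-4",
        "api_key": "sk-...",
    },
    # Azure OpenAI: api_type, api_base, and api_version are required, and the
    # "model" value is expected to match an existing deployment in the Azure
    # resource; otherwise the service answers with "The API deployment for
    # this resource does not exist."
    {
        "model": "my-gpt4-deployment",
        "api_key": "<azure-openai-key>",
        "api_base": "https://<resource-name>.openai.azure.com",
        "api_type": "azure",
        "api_version": "2023-07-01-preview",
    },
]
```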

sonichi commented 1 year ago

Is it because the default api_type in the MS Fabric env is "azure"?
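One way to test that hypothesis is to inspect what the environment has preconfigured and, if needed, override it. A sketch for the pre-1.0 openai package that this AutoGen version uses; the values assigned at the end are assumptions for the plain-OpenAI case:

```python
import os
import openai

# See whether the Fabric environment presets Azure-style defaults.
print("OPENAI_API_TYPE =", os.environ.get("OPENAI_API_TYPE"))
print("OPENAI_API_BASE =", os.environ.get("OPENAI_API_BASE"))
print("openai.api_type =", openai.api_type)
print("openai.api_base =", openai.api_base)

# If the defaults point at Azure but a plain OpenAI key is being used,
# force the standard OpenAI settings (placeholder values shown).
openai.api_type = "open_ai"
openai.api_base = "https://api.openai.com/v1"
openai.api_version = None
```

Setting api_type, api_base, and api_version per entry in the config list is usually cleaner than changing the module-level defaults.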