microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

[Bug]: Error on autobuild_agent_library #3017

Closed JinSeoung-Oh closed 3 months ago

JinSeoung-Oh commented 3 months ago

Describe the bug

I built a GroupChat with AutoGen based on this code: https://github.com/microsoft/autogen/blob/main/notebook/autobuild_agent_library.ipynb

It was working well, but it suddenly started showing the error below:

BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'messages[1].name': string does not match pattern. Expected a string that matches the pattern '^[a-zA-Z0-9_-]+$'.", 'type': 'invalid_request_error', 'param': 'messages[1].name', 'code': 'invalid_value'}}

I tried to fix it but could not, so I ran the notebook at https://github.com/microsoft/autogen/blob/main/notebook/autobuild_agent_library.ipynb directly, and it also returned the error below:

BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'messages[1].name': string does not match pattern. Expected a string that matches the pattern '^[a-zA-Z0-9_-]+$'.", 'type': 'invalid_request_error', 'param': 'messages[1].name', 'code': 'invalid_value'}}

I cannot understand why; this morning it was fine. How can I solve this problem?
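For anyone debugging the same 400 error: the OpenAI Chat Completions API only accepts a message `name` field matching `^[a-zA-Z0-9_-]+$`, so any agent name with a space or other special character is rejected. A quick way to check your names before building the group chat (a minimal sketch; `find_invalid_names` and the sample list are hypothetical, standing in for whatever names you pass to AgentBuilder):

```python
import re

# OpenAI's pattern for the optional `name` field on chat messages
NAME_PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")

def find_invalid_names(agent_names):
    """Return the names that would trigger the 400 invalid_value error."""
    return [n for n in agent_names if not NAME_PATTERN.fullmatch(n)]

# A space in "Network Administrator" violates the pattern
print(find_invalid_names(["Network_Administrator", "Network Administrator"]))
# → ['Network Administrator']
```

Running this over your agent list before calling `build_from_library` makes the failing name obvious instead of surfacing as a BadRequestError deep in the group chat.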

Steps to reproduce

Just run "https://github.com/microsoft/autogen/blob/main/notebook/autobuild_agent_library.ipynb"

python==3.10.14, torch==3.2.0, torchaudio==2.1.1, torchvision==0.16.1, pyautogen==0.2.31, openai==1.30.2

Model Used

gpt-4o

Expected Behavior

Multi-turn conversation between LLM agents

Screenshots and logs

==> Looking for suitable agents in the library...
['Network Administrator', 'IT Consultant', 'Blockchain Developer'] are selected.
==> Creating agents...
Creating agent Network Administrator...
Creating agent IT Consultant...
Creating agent Blockchain Developer...
Adding user console proxy...
Network Administrator (to chat_manager):

Find a recent paper about explainable AI on arxiv and find its potential applications in medical.



BadRequestError                           Traceback (most recent call last)
Cell In[57], line 5
      1 new_builder = AgentBuilder(
      2     config_file_or_env=config_file_or_env, builder_model="gpt-4o", agent_model="gpt-4o"
      3 )
      4 agent_list, _ = new_builder.build_from_library(building_task, library_path_or_json, llm_config)
----> 5 start_task(
      6     execution_task="Find a recent paper about explainable AI on arxiv and find its potential applications in medical.",
      7     agent_list=agent_list,
      8 )
      9 new_builder.clear_all_agents()

Cell In[55], line 13, in start_task(execution_task, agent_list)
     11 group_chat = autogen.GroupChat(agents=agent_list, messages=[], max_round=12)
     12 manager = autogen.GroupChatManager(groupchat=group_chat, llm_config={"config_list": config_list, **llm_config})
---> 13 agent_list[0].initiate_chat(manager, message=execution_task)

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:1018, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, cache, max_turns, summary_method, summary_args, message, **kwargs) 1016 else: 1017 msg2send = self.generate_init_message(message, **kwargs) -> 1018 self.send(msg2send, recipient, silent=silent) 1019 summary = self._summarize_chat( 1020 summary_method, 1021 summary_args, 1022 recipient, 1023 cache=cache, 1024 ) 1025 for agent in [self, recipient]:

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:655, in ConversableAgent.send(self, message, recipient, request_reply, silent) 653 valid = self._append_oai_message(message, "assistant", recipient) 654 if valid: --> 655 recipient.receive(message, self, request_reply, silent) 656 else: 657 raise ValueError( 658 "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided." 659 )

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:818, in ConversableAgent.receive(self, message, sender, request_reply, silent) 816 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False: 817 return --> 818 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender) 819 if reply is not None: 820 self.send(reply, sender, silent=silent)

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:1972, in ConversableAgent.generate_reply(self, messages, sender, **kwargs) 1970 continue 1971 if self._match_trigger(reply_func_tuple["trigger"], sender): -> 1972 final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"]) 1973 if logging_enabled(): 1974 log_event( 1975 self, 1976 "reply_func_executed", (...) 1980 reply=reply, 1981 )

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/groupchat.py:1047, in GroupChatManager.run_chat(self, messages, sender, config) 1044 break 1045 try: 1046 # select the next speaker -> 1047 speaker = groupchat.select_speaker(speaker, self) 1048 if not silent: 1049 iostream = IOStream.get_default()

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/groupchat.py:538, in GroupChat.select_speaker(self, last_speaker, selector) 535 return self.next_agent(last_speaker) 537 # auto speaker selection with 2-agent chat --> 538 return self._auto_select_speaker(last_speaker, selector, messages, agents)

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/groupchat.py:658, in GroupChat._auto_select_speaker(self, last_speaker, selector, messages, agents) 655 start_message = messages[-1] 657 # Run the speaker selection chat --> 658 result = checking_agent.initiate_chat( 659 speaker_selection_agent, 660 cache=None, # don't use caching for the speaker selection chat 661 message=start_message, 662 max_turns=2 663 * max(1, max_attempts), # Limiting the chat to the number of attempts, including the initial one 664 clear_history=False, 665 silent=not self.select_speaker_auto_verbose, # Base silence on the verbose attribute 666 ) 668 return self._process_speaker_selection_result(result, last_speaker, agents)

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:1011, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, cache, max_turns, summary_method, summary_args, message, **kwargs) 1009 if msg2send is None: 1010 break -> 1011 self.send(msg2send, recipient, request_reply=True, silent=silent) 1012 else: 1013 self._prepare_chat(recipient, clear_history)

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:655, in ConversableAgent.send(self, message, recipient, request_reply, silent) 653 valid = self._append_oai_message(message, "assistant", recipient) 654 if valid: --> 655 recipient.receive(message, self, request_reply, silent) 656 else: 657 raise ValueError( 658 "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided." 659 )

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:818, in ConversableAgent.receive(self, message, sender, request_reply, silent) 816 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False: 817 return --> 818 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender) 819 if reply is not None: 820 self.send(reply, sender, silent=silent)

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:1972, in ConversableAgent.generate_reply(self, messages, sender, **kwargs) 1970 continue 1971 if self._match_trigger(reply_func_tuple["trigger"], sender): -> 1972 final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"]) 1973 if logging_enabled(): 1974 log_event( 1975 self, 1976 "reply_func_executed", (...) 1980 reply=reply, 1981 )

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:1340, in ConversableAgent.generate_oai_reply(self, messages, sender, config) 1338 if messages is None: 1339 messages = self._oai_messages[sender] -> 1340 extracted_response = self._generate_oai_reply_from_client( 1341 client, self._oai_system_message + messages, self.client_cache 1342 ) 1343 return (False, None) if extracted_response is None else (True, extracted_response)

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:1359, in ConversableAgent._generate_oai_reply_from_client(self, llm_client, messages, cache) 1356 all_messages.append(message) 1358 # TODO: #1143 handle token limit exceeded error -> 1359 response = llm_client.create( 1360 context=messages[-1].pop("context", None), messages=all_messages, cache=cache, agent=self 1361 ) 1362 extracted_response = llm_client.extract_text_or_completion_object(response)[0] 1364 if extracted_response is None:

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/oai/client.py:697, in OpenAIWrapper.create(self, **config) 695 try: 696 request_ts = get_current_ts() --> 697 response = client.create(params) 698 except APITimeoutError as err: 699 logger.debug(f"config {i} timed out", exc_info=True)

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/autogen/oai/client.py:306, in OpenAIClient.create(self, params) 304 params = params.copy() 305 params["stream"] = False --> 306 response = completions.create(**params) 308 return response

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/openai/_utils/_utils.py:277, in required_args.&lt;locals&gt;.inner.&lt;locals&gt;.wrapper(*args, **kwargs) 275 msg = f"Missing required argument: {quote(missing[0])}" 276 raise TypeError(msg) --> 277 return func(*args, **kwargs)

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/openai/resources/chat/completions.py:590, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout) 558 @required_args(["messages", "model"], ["messages", "model", "stream"]) 559 def create( 560 self, (...) 588 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN, 589 ) -> ChatCompletion | Stream[ChatCompletionChunk]: --> 590 return self._post( 591 "/chat/completions", 592 body=maybe_transform( 593 { 594 "messages": messages, 595 "model": model, 596 "frequency_penalty": frequency_penalty, 597 "function_call": function_call, 598 "functions": functions, 599 "logit_bias": logit_bias, 600 "logprobs": logprobs, 601 "max_tokens": max_tokens, 602 "n": n, 603 "presence_penalty": presence_penalty, 604 "response_format": response_format, 605 "seed": seed, 606 "stop": stop, 607 "stream": stream, 608 "stream_options": stream_options, 609 "temperature": temperature, 610 "tool_choice": tool_choice, 611 "tools": tools, 612 "top_logprobs": top_logprobs, 613 "top_p": top_p, 614 "user": user, 615 }, 616 completion_create_params.CompletionCreateParams, 617 ), 618 options=make_request_options( 619 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout 620 ), 621 cast_to=ChatCompletion, 622 stream=stream or False, 623 stream_cls=Stream[ChatCompletionChunk], 624 )

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/openai/_base_client.py:1240, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls) 1226 def post( 1227 self, 1228 path: str, (...) 1235 stream_cls: type[_StreamT] | None = None, 1236 ) -> ResponseT | _StreamT: 1237 opts = FinalRequestOptions.construct( 1238 method="post", url=path, json_data=body, files=to_httpx_files(files), **options 1239 ) -> 1240 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/openai/_base_client.py:921, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls) 912 def request( 913 self, 914 cast_to: Type[ResponseT], (...) 919 stream_cls: type[_StreamT] | None = None, 920 ) -> ResponseT | _StreamT: --> 921 return self._request( 922 cast_to=cast_to, 923 options=options, 924 stream=stream, 925 stream_cls=stream_cls, 926 remaining_retries=remaining_retries, 927 )

File ~/anaconda3/envs/torch-3.10/lib/python3.10/site-packages/openai/_base_client.py:1020, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls) 1017 err.response.read() 1019 log.debug("Re-raising status error") -> 1020 raise self._make_status_error_from_response(err.response) from None 1022 return self._process_response( 1023 cast_to=cast_to, 1024 options=options, (...) 1027 stream_cls=stream_cls, 1028 )

BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'messages[1].name': string does not match pattern. Expected a string that matches the pattern '^[a-zA-Z0-9_-]+$'.", 'type': 'invalid_request_error', 'param': 'messages[1].name', 'code': 'invalid_value'}}

Additional Information

No response

JinSeoung-Oh commented 3 months ago

Oh, it was my mistake. In the agent_list, I had entered a space in one of the agent names. I'm sorry. When I typed the code in manually, I must have added a space there.
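For anyone who hits the same error: a space (or any character outside `[a-zA-Z0-9_-]`) in an agent name is rejected by the OpenAI API. One way to guard against this is to sanitize names before creating agents. A sketch, not part of AutoGen's API; `sanitize_agent_name` is a hypothetical helper:

```python
import re

def sanitize_agent_name(name: str) -> str:
    """Replace characters outside [a-zA-Z0-9_-] with underscores,
    so the name satisfies OpenAI's ^[a-zA-Z0-9_-]+$ constraint."""
    return re.sub(r"[^a-zA-Z0-9_-]", "_", name)

print(sanitize_agent_name("Network Administrator"))  # → Network_Administrator
```

Mapping this over the names in the agent library (or over manually typed lists) before building the group chat avoids this class of BadRequestError entirely.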