Closed Nbtguyoriginal closed 1 year ago
I've been getting these OpenAI timeouts lately as well. It seems they sometimes hold the connection open, and take a while to respond (perhaps as an alternative to returning a rate limit error).
Reading through the code, I think your max_retry_period setting may be too low (the timeout is 260, while the default max_retry_period is 120). Try setting max_retry_period to something like 2-3x the timeout, so that you get a few retry attempts instead of failing on the first timeout.
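For concreteness, here is a minimal sketch of that ratio. It assumes an autogen version where `timeout` and `max_retry_period` are plain keys in the llm_config dict (as in the configs posted later in this thread); the `config_list` value is a placeholder you would fill in yourself.

```python
# Sketch only: parameter names follow the older autogen llm_config style used
# elsewhere in this thread; verify them against your installed version.
timeout = 260  # seconds allowed for a single OpenAI response

llm_config = {
    "config_list": [],        # placeholder: load your real OAI_CONFIG_LIST here
    "timeout": timeout,
    # Give the client 2-3 full timeouts' worth of retry budget, so one slow
    # response doesn't immediately surface as a failure (the default is 120 s).
    "max_retry_period": 3 * timeout,
}
```

With timeout=260, the default max_retry_period of 120 is exhausted before a single request is even allowed to run to its deadline, which is why the first timeout becomes fatal.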
Looks like this is a timeout issue caused by the OpenAI endpoints, not by the framework:
https://github.com/microsoft/autogen/issues/411#issuecomment-1778502055
so switching to a different endpoint should fix the issue.
https://github.com/microsoft/autogen/issues/411#issuecomment-1778457915
Any guidance on which file that's inside of?
- max_retry_period
I believe you set that in the llm_config.
No change. I also revised the roles to ensure no redundancy; same error as before. I tried changing seeds and adjusting the instructions. What should I do next?
gpt4_config = {
    "seed": 3,
    "max_retry_period": 1000,
    "temperature": 0.3,
    "config_list": config_list,
    "timeout": 400,  # defaults to 60 seconds, but can be adjusted as needed
}
gpt4_config2 = {
    "seed": 4,
    "max_retry_period": 1000,
    "temperature": 0,
    "config_list": config_list2,
    "timeout": 400,  # defaults to 60 seconds, but can be adjusted as needed
}
# Create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="Developer",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=50,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,
    },
)
# Define each role
project_manager = autogen.AssistantAgent(
    name="project_manager",
    llm_config=gpt4_config2,
    system_message="""Oversees the entire project, ensures tasks are completed and delivered quickly, manages risks, and communicates with other members of the conversation. Ensures that the project aligns with the goal of creating a secure messaging platform using Discord webhooks. When asked a question, will always attempt to answer it to drive the project to completion.""",
)
python_lead_developer = autogen.AssistantAgent(
    name="python_lead_developer",
    llm_config=gpt4_config2,
    system_message="Leads the Python development team and oversees the main compilation of the final product. Makes key architectural decisions and ensures the development aligns with the project's goals. Collaborates closely with the Python Expert to review and refine the codebase, ensuring its correctness and quality. Together, they address any discrepancies or issues in the Python code before finalization.",
)
python_expert = autogen.AssistantAgent(
    name="python_expert",
    llm_config=gpt4_config,
    system_message="Offers deep expertise in Python programming. Collaborates with the Python Lead Developer to review and refine the codebase. Advises on best practices related to code creation, code optimization, and advanced Python techniques. Assists in integrating Discord webhooks and developing the local program for data transportation.",
)
frontend_developer = autogen.AssistantAgent(
    name="frontend_developer",
    llm_config=gpt4_config,
    system_message="Designs and implements the user interface for both user and admin pages. Ensures seamless integration with the backend and a responsive design for various devices.",
)
Backend_developer = autogen.AssistantAgent(
    name="Backend_developer",
    llm_config=gpt4_config,
    system_message="Responsible for server-side application logic and for integrating the work of the front-end developers. Designs and implements APIs and the database architecture, and ensures the performance and reliability of the server side of the application.",
)
database_administrator = autogen.AssistantAgent(
    name="database_administrator",
    llm_config=gpt4_config2,
    system_message="Sets up and maintains the database for user data and chat logs. Implements efficient retrieval mechanisms and ensures data integrity and security.",
)
security_expert = autogen.AssistantAgent(
    name="security_expert",
    llm_config=gpt4_config,
    system_message="Ensures end-to-end security of the application. Implements authentication and authorization mechanisms and monitors webhook security.",
)
Front_end_security_expert = autogen.AssistantAgent(
    name="Front_end_security_expert",
    llm_config=gpt4_config,
    system_message="Ensures front-end security of the application by conducting detailed tests on the functionality of the front-end user entry points.",
)
qa_tester = autogen.AssistantAgent(
    name="qa_tester",
    llm_config=gpt4_config,
    system_message="Conducts thorough testing of the application and features, focusing on functionality, security, and user experience. Collaborates with developers to address identified issues.",
)
devops_engineer = autogen.AssistantAgent(
    name="devops_engineer",
    llm_config=gpt4_config2,
    system_message="Manages the deployment pipeline, ensures smooth CI/CD processes, and sets up the necessary infrastructure for scalability and reliability.",
)
researcher = autogen.AssistantAgent(
    name="researcher",
    llm_config=gpt4_config,
    system_message="Explores new methods and technologies that can enhance the platform. Assists in understanding the potential of Discord webhooks, how to use them, and sophisticated examples of applications and use cases, and recommends improvements to overall code functionality.",
)
documentation_writer = autogen.AssistantAgent(
    name="documentation_writer",
    llm_config=gpt4_config,
    system_message="Creates comprehensive user manuals, API documentation, and internal documentation to guide both end users and developers. The documents should be well organized and comprehensive in explaining their subjects.",
)
planner = autogen.AssistantAgent(
    name="planner",
    llm_config=gpt4_config,
    system_message="Collaborates with the Project Manager to break tasks down into their subsidiary parts, sets milestones and deadlines by suggesting the order of completion, and helps manage the efficient allocation of resources.",
)
product_manager = autogen.AssistantAgent(
    name="product_manager",
    llm_config=gpt4_config,
    system_message="Aligns the product with market needs, sets the product ID throughout development, gathers feedback, and ensures the platform meets user expectations and all requirements for deployment.",
)
ui_ux_designer = autogen.AssistantAgent(
    name="ui_ux_designer",
    llm_config=gpt4_config2,
    system_message="Designs intuitive and aesthetically pleasing interfaces. Focuses on user experience, ensuring easy navigation, a cohesive design language, appealing colors, and detailed animations throughout.",
)
network_engineer = autogen.AssistantAgent(
    name="network_engineer",
    llm_config=gpt4_config,
    system_message="Ensures a robust and secure application, free of known and potential vulnerabilities.",
)
human_resources = autogen.AssistantAgent(
    name="human_resources",
    llm_config=gpt4_config,
    system_message="Oversees team well-being, manages hiring and onboarding, ensures a positive work environment, addresses any personnel-related concerns, and pushes each member to do their very best in a positive manner.",
)
training_development = autogen.AssistantAgent(
    name="training_development",
    llm_config=gpt4_config,
    system_message="Ensures the team is up to date with the latest technologies and best practices. Provides training sessions and continuous learning opportunities.",
)
critic = autogen.AssistantAgent(
    name="critic",
    system_message="""Critic. You are a helpful assistant highly skilled in evaluating the quality of given ideas or code by providing a score from 1 (bad) to 10 (good) with clear rationale. YOU MUST CONSIDER VISUALIZATION BEST PRACTICES for each evaluation. Specifically, carefully evaluate the code across the following dimensions:
- bugs (bugs): Are there bugs, logic errors, syntax errors, or typos? Are there any reasons why the code may fail to compile? How should it be fixed? If ANY bug exists, the bug score MUST be less than 5.
- Data transformation (transformation): Is the data transformed appropriately for the visualization type? E.g., is the dataset appropriately filtered, aggregated, or grouped if needed? If a date field is used, is it first converted to a date object, etc.?
- Goal compliance (compliance): How well does the code meet the specified visualization goals?
- Visualization type (type): CONSIDERING BEST PRACTICES, is the visualization type appropriate for the data and intent? Is there a visualization type that would be more effective in conveying insights? If a different visualization type is more appropriate, the score MUST BE LESS THAN 5.
- Data encoding (encoding): Is the data encoded appropriately for the visualization type?
- aesthetics (aesthetics): Are the aesthetics of the visualization appropriate for the visualization type and the data?
- If no code is provided, create a skeleton code base of what should have been provided; otherwise your main task is the evaluation of the ideas and the code itself.
YOU MUST PROVIDE A SCORE for each of the above dimensions, if applicable; otherwise return the needed skeleton code and instructions:
{bugs: 0, transformation: 0, compliance: 0, type: 0, encoding: 0, aesthetics: 0}
Do not suggest code if it is already provided.
Finally, based on the critique above, suggest a concrete list of actions that the coder should take to improve the code.
""",
    llm_config=gpt4_config2,
)
groupChat = autogen.GroupChat(
    agents=[
        user_proxy, project_manager, python_lead_developer, python_expert, frontend_developer,
        Backend_developer, database_administrator, security_expert, Front_end_security_expert,
        qa_tester, devops_engineer, researcher, documentation_writer, planner, product_manager,
        ui_ux_designer, network_engineer, human_resources, training_development, critic,
    ],
    messages=[],
    max_round=10,
)
manager = autogen.GroupChatManager(groupchat=groupChat, llm_config=gpt4_config)
Shoot, I just noticed that I'm setting a different parameter, request_timeout, in the llm_config. Can you give that a try?
config_list = config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4"]},
)
assistant = AssistantAgent(
    "assistant",
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
    llm_config={"request_timeout": 180, "config_list": config_list},
)
Traceback (most recent call last):
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 1375, in getresponse
    response.begin()
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 318, in begin
    version, status, reason = self._read_status()
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\http\client.py", line 279, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\socket.py", line 706, in readinto
    return self._sock.recv_into(b)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\ssl.py", line 1278, in recv_into
    return self.read(nbytes, buffer)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\ssl.py", line 1134, in read
    return self._sslobj.read(len, buffer)
TimeoutError: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\adapters.py", line 486, in send
    resp = conn.urlopen(
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\util\retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\packages\six.py", line 770, in reraise
    raise value
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 451, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\urllib3\connectionpool.py", line 340, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 596, in request_raw
    result = _thread_context.session.request(
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\requests\adapters.py", line 532, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=60)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "C:\Users\knigh\OneDrive\Desktop\auto gen\working gen\gen.py", line 186, in <module>
    user_proxy.initiate_chat(manager, message=
        "Project Title: Minecraft Utility Client with Integrated LLM Agents. Description: Develop a comprehensive Minecraft utility client designed to enhance the gameplay experience for both single-player and multiplayer server modes. This utility client will seamlessly integrate multiple LLM (Learning and Logic Modules) agents, allowing players to interactively add new features and functionalities in real-time. Primary Features: 1. LLM Agents Integration: Embed multiple Learning and Logic Modules (LLM) that can be activated, modified, or deactivated according to player preferences. 2. Dynamic Main Menu Interface: A user-friendly main menu interface for accessing and managing the LLM agents, ensuring a streamlined user experience. 3. Real-time Feature Addition: Allow players to add new gameplay features without disrupting their current game, regardless of whether they are in single-player mode or on a server. 4. Compatibility: Ensure the utility client remains compatible with both single-player and multiplayer server modes, avoiding any potential conflicts or disruptions. Programming Language: Java. Recommended Libraries & Frameworks: 1. Minecraft Forge or Fabric: Utilize Minecraft modding platforms such as Forge or Fabric for creating and integrating the utility client. 2. JavaFX: For building the main menu interface and ensuring a modern and responsive user experience. 3. Any suitable ML library (if required by LLM agents): If the LLM agents utilize machine learning, integrate a Java-compatible ML library such as Deeplearning4j. Best Practices: 1. Modular Design: Structure the codebase in a modular fashion to facilitate easy additions, updates, or removals of LLM agents in the future. 2. Optimization: Ensure the utility client does not significantly impact the game's performance. Regularly test and optimize the client to minimize its resource footprint. 3. User Feedback System: Implement a feedback mechanism within the utility client to gather player insights, which can be invaluable for future improvements. 4. Regular Updates: Keep the utility client updated with the latest versions of Minecraft, ensuring consistent compatibility and performance."
    )
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 531, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 334, in send
    recipient.receive(message, self, request_reply, silent)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 462, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 781, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\groupchat.py", line 129, in run_chat
    reply = speaker.generate_reply(sender=self)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 781, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py", line 606, in generate_oai_reply
    response = oai.ChatCompletion.create(
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\oai\completion.py", line 799, in create
    response = cls.create(
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\oai\completion.py", line 830, in create
    return cls._get_response(params, raise_on_ratelimit_or_timeout=raise_on_ratelimit_or_timeout)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\oai\completion.py", line 220, in _get_response
    response = openai_completion.create(request_timeout=request_timeout, **config)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\knigh\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 288, in request
    result = self.request_raw(
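For intuition about what max_retry_period buys you, here is a generic retry loop with a wall-clock time budget. This is a hypothetical sketch of the behavior, not autogen's actual internals; every name in it (call_with_retries, flaky_endpoint) is made up for illustration.

```python
import time

def call_with_retries(fn, max_retry_period=120.0, base_delay=1.0):
    """Retry fn() on TimeoutError until max_retry_period seconds have elapsed,
    doubling the delay between attempts (exponential backoff). A hypothetical
    sketch of what a max_retry_period-style setting provides, not real autogen code."""
    start = time.monotonic()
    delay = base_delay
    while True:
        try:
            return fn()
        except TimeoutError:
            if time.monotonic() - start + delay > max_retry_period:
                raise  # retry budget exhausted: surface the timeout
            time.sleep(delay)
            delay *= 2

# Demo: a fake endpoint that times out twice, then succeeds.
attempts = {"n": 0}

def flaky_endpoint():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("read operation timed out")
    return "ok"

result = call_with_retries(flaky_endpoint, max_retry_period=30.0, base_delay=0.01)
```

The point is that the budget is wall-clock time rather than an attempt count: with a 400 s timeout and a 1000 s budget you get roughly two full retries, which matches the 2-3x guidance earlier in the thread.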