OpenBMB / ChatDev

Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
https://arxiv.org/abs/2307.07924
Apache License 2.0
24.38k stars · 3.06k forks

Got RateLimitError #373

Open BlairLee opened 2 months ago

BlairLee commented 2 months ago

I got the following error messages when I ran ChatDev on my Mac. Does it mean I need more quota for my OpenAI API key?

Traceback (most recent call last):
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/Users/bowenli/Documents/ChatDev/camel/utils.py", line 154, in wrapper
    return func(self, *args, **kwargs)
  File "/Users/bowenli/Documents/ChatDev/camel/agents/chat_agent.py", line 239, in step
    response = self.model_backend.run(messages=openai_messages)
  File "/Users/bowenli/Documents/ChatDev/camel/model_backend.py", line 100, in run
    response = client.chat.completions.create(*args, **kwargs, model=self.model_type.value,
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/openai/_utils/_utils.py", line 299, in wrapper
    return func(*args, **kwargs)
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/openai/resources/chat/completions.py", line 598, in create
    return self._post(
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/openai/_base_client.py", line 1055, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/openai/_base_client.py", line 834, in request
    return self._request(
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/openai/_base_client.py", line 865, in _request
    return self._retry_request(
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/openai/_base_client.py", line 925, in _retry_request
    return self._request(
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/openai/_base_client.py", line 865, in _request
    return self._retry_request(
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/openai/_base_client.py", line 925, in _retry_request
    return self._request(
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/openai/_base_client.py", line 877, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/bowenli/Documents/ChatDev/run.py", line 134, in <module>
    chat_chain.execute_chain()
  File "/Users/bowenli/Documents/ChatDev/chatdev/chat_chain.py", line 168, in execute_chain
    self.execute_step(phase_item)
  File "/Users/bowenli/Documents/ChatDev/chatdev/chat_chain.py", line 138, in execute_step
    self.chat_env = self.phases[phase].execute(self.chat_env,
  File "/Users/bowenli/Documents/ChatDev/chatdev/phase.py", line 295, in execute
    self.chatting(chat_env=chat_env,
  File "/Users/bowenli/Documents/ChatDev/chatdev/utils.py", line 79, in wrapper
    return func(*args, **kwargs)
  File "/Users/bowenli/Documents/ChatDev/chatdev/phase.py", line 133, in chatting
    assistant_response, user_response = role_play_session.step(input_user_msg, chat_turn_limit == 1)
  File "/Users/bowenli/Documents/ChatDev/camel/agents/role_playing.py", line 247, in step
    assistant_response = self.assistant_agent.step(user_msg_rst)
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/Users/bowenli/miniconda3/envs/ChatDev_conda_env/lib/python3.9/site-packages/tenacity/__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x116ec7eb0 state=finished raised RateLimitError>]

jindaff commented 2 months ago

Same problem, waiting for it to be solved.

Linjian-PA commented 2 months ago

same

bluxolguin commented 2 months ago

This is because you exceeded the quota on your OpenAI account.
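To confirm that diagnosis independently of ChatDev, a single direct call is enough; a minimal sketch, assuming the openai>=1.x Python client and an OPENAI_API_KEY in the environment (the model name and max_tokens value are just examples):

```python
# Minimal quota check outside ChatDev. With no remaining quota, this one call raises
# openai.RateLimitError carrying code 'insufficient_quota', exactly as in the traceback above.
import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
try:
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=1,
    )
    print("API key has usable quota.")
except openai.RateLimitError as e:
    print("Quota or rate-limit problem:", e)
```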

bdv1 commented 2 months ago

I have just purchased 5 dollars in credits and am now "usage tier 1". I added an @retry with a 2-second wait in the model_backend.py file. The log still shows errors. What am I doing wrong?
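For reference, a retry of the kind described here would look roughly like the sketch below (using tenacity; the wrapper function name and its placement around the OpenAI call in camel/model_backend.py are assumptions, not ChatDev's actual code). Note that a fixed wait only smooths over transient 429 rate limits; it cannot recover from an 'insufficient_quota' 429.

```python
# Sketch of the kind of retry described above, using tenacity. The wrapper name and its
# placement (around client.chat.completions.create in camel/model_backend.py) are
# assumptions for illustration, not ChatDev's actual code.
from openai import OpenAI, RateLimitError
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_fixed

@retry(retry=retry_if_exception_type(RateLimitError),
       wait=wait_fixed(2),            # wait 2 seconds between attempts
       stop=stop_after_attempt(5))    # give up after 5 attempts
def create_chat_completion(client: OpenAI, **kwargs):
    # Retries only on RateLimitError; any other exception propagates immediately.
    return client.chat.completions.create(**kwargs)
```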

[2024-20-04 15:10:32 INFO] [Preprocessing]

ChatDev Starts (20240420151032)

Timestamp: 20240420151032

config_path: C:\Users\\ChatDev\CompanyConfig\Default\ChatChainConfig.json

config_phase_path: C:\Users\\ChatDev\CompanyConfig\Default\PhaseConfig.json

config_role_path: C:\Users\\ChatDev\CompanyConfig\Default\RoleConfig.json

task_prompt:

project_name:

Log File: C:\Users\\ChatDev\WareHouse\projectname_DefaultOrganization_20240420151032.log

ChatDevConfig: ChatEnvConfig.with_memory: False ChatEnvConfig.clear_structure: True ChatEnvConfig.git_management: False ChatEnvConfig.gui_design: True ChatEnvConfig.incremental_develop: False ChatEnvConfig.background_prompt: ChatDev is a software company powered by multiple intelligent agents, such as chief executive officer, chief human resources officer, chief product officer, chief technology officer, etc, with a multi-agent organizational structure and the mission of 'changing the digital world through programming'.

ChatGPTConfig: ChatGPTConfig(temperature=0.2, top_p=1.0, n=1, stream=False, stop=None, max_tokens=None, presence_penalty=0.0, frequency_penalty=0.0, logit_bias={}, user='')

[2024-20-04 15:10:34 INFO] flask app.py did not start for online log
[2024-20-04 15:10:34 INFO] System: [chatting]

Parameter Value
task_prompt
need_reflect True
assistant_role_name Chief Product Officer
user_role_name Chief Executive Officer
phase_prompt ChatDev has made products in the following form before: Image: can present information in line chart, bar chart, flow chart, cloud chart, Gantt chart, etc. Document: can present information via .docx files. PowerPoint: can present information via .pptx files. Excel: can present information via .xlsx files. PDF: can present information via .pdf files. Website: can present personal resume, tutorial, products, or ideas, via .html files. Application: can implement visualized game, software, tool, etc, via python. Dashboard: can display a panel visualizing real-time information. Mind Map: can represent ideas, with related concepts arranged around a core concept. As the {assistant_role}, to satisfy the new user's demand and the product should be realizable, you should keep discussing with me to decide which product modality do we want the product to be? Note that we must ONLY discuss the product modality and do not discuss anything else! Once we all have expressed our opinion(s) and agree with the results of the discussion unanimously, any of us must actively terminate the discussion by replying with only one line, which starts with a single word <INFO>, followed by our final product modality without any other words, e.g., "<INFO> PowerPoint".
phase_name DemandAnalysis
assistant_role_prompt {chatdev_prompt} You are Chief Product Officer. we are both working at ChatDev. We share a common interest in collaborating to successfully complete a task assigned by a new customer. You are responsible for all product-related matters in ChatDev. Usually includes product design, product strategy, product vision, product innovation, project management and product marketing. Here is a new customer's task: {task}. To complete the task, you must write a response that appropriately solves the requested instruction based on your expertise and customer's needs.
user_role_prompt {chatdev_prompt} You are Chief Executive Officer. Now, we are both working at ChatDev and we share a common interest in collaborating to successfully complete a task assigned by a new customer. Your main responsibilities include being an active decision-maker on users' demands and other key policy issues, leader, manager, and executor. Your decision-making role involves high-level decisions about policy and strategy; and your communicator role can involve speaking to the organization's management and employees. Here is a new customer's task: {task}. To complete the task, I will give you one or more instructions, and you must help me to write a specific solution that appropriately solves the requested instruction based on your expertise and my needs.
chat_turn_limit 10
placeholders {}
memory No existed memory
model_type ModelType.GPT_3_5_TURBO_NEW

[2024-20-04 15:10:36 INFO] flask app.py did not start for online log
[2024-20-04 15:10:36 INFO] System: [RolePlaying]

Parameter Value
assistant_role_name Chief Product Officer
user_role_name Chief Executive Officer
assistant_role_prompt {chatdev_prompt} You are Chief Product Officer. we are both working at ChatDev. We share a common interest in collaborating to successfully complete a task assigned by a new customer. You are responsible for all product-related matters in ChatDev. Usually includes product design, product strategy, product vision, product innovation, project management and product marketing. Here is a new customer's task: {task}. To complete the task, you must write a response that appropriately solves the requested instruction based on your expertise and customer's needs.
user_role_prompt {chatdev_prompt} You are Chief Executive Officer. Now, we are both working at ChatDev and we share a common interest in collaborating to successfully complete a task assigned by a new customer. Your main responsibilities include being an active decision-maker on users' demands and other key policy issues, leader, manager, and executor. Your decision-making role involves high-level decisions about policy and strategy; and your communicator role can involve speaking to the organization's management and employees. Here is a new customer's task: {task}. To complete the task, I will give you one or more instructions, and you must help me to write a specific solution that appropriately solves the requested instruction based on your expertise and my needs.
task_prompt
with_task_specify False
memory No existed memory
model_type ModelType.GPT_3_5_TURBO_NEW
background_prompt ChatDev is a software company powered by multiple intelligent agents, such as chief executive officer, chief human resources officer, chief product officer, chief technology officer, etc, with a multi-agent organizational structure and the mission of 'changing the digital world through programming'.

[2024-20-04 15:10:38 INFO] flask app.py did not start for online log
[2024-20-04 15:10:38 INFO] Chief Executive Officer: [Start Chat]

[ChatDev is a software company powered by multiple intelligent agents, such as chief executive officer, chief human resources officer, chief product officer, chief technology officer, etc, with a multi-agent organizational structure and the mission of 'changing the digital world through programming'. You are Chief Product Officer. we are both working at ChatDev. We share a common interest in collaborating to successfully complete a task assigned by a new customer. You are responsible for all product-related matters in ChatDev. Usually includes product design, product strategy, product vision, product innovation, project management and product marketing. Here is a new customer's task: To complete the task, you must write a response that appropriately solves the requested instruction based on your expertise and customer's needs.]

ChatDev has made products in the following form before:

Image: can present information in line chart, bar chart, flow chart, cloud chart, Gantt chart, etc.

Document: can present information via .docx files.

PowerPoint: can present information via .pptx files.

Excel: can present information via .xlsx files.

PDF: can present information via .pdf files.

Website: can present personal resume, tutorial, products, or ideas, via .html files.

Application: can implement visualized game, software, tool, etc, via python.

Dashboard: can display a panel visualizing real-time information.

Mind Map: can represent ideas, with related concepts arranged around a core concept.

As the Chief Product Officer, to satisfy the new user's demand and the product should be realizable, you should keep discussing with me to decide which product modality do we want the product to be?

Note that we must ONLY discuss the product modality and do not discuss anything else! Once we all have expressed our opinion(s) and agree with the results of the discussion unanimously, any of us must actively terminate the discussion by replying with only one line, which starts with a single word <INFO>, followed by our final product modality without any other words, e.g., "<INFO> PowerPoint".

[2024-20-04 15:10:40 INFO] flask app.py did not start for online log
[2024-20-04 15:10:42 INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
[2024-20-04 15:10:42 INFO] Retrying request to /chat/completions in 0.912740 seconds
[2024-20-04 15:10:43 INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
[2024-20-04 15:10:43 INFO] Retrying request to /chat/completions in 1.551145 seconds
[2024-20-04 15:10:45 INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
[2024-20-04 15:10:48 INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
[2024-20-04 15:10:48 INFO] Retrying request to /chat/completions in 0.888024 seconds
[2024-20-04 15:10:49 INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
[2024-20-04 15:10:49 INFO] Retrying request to /chat/completions in 1.601650 seconds
[2024-20-04 15:10:51 INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
[2024-20-04 15:10:53 INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
[2024-20-04 15:10:53 INFO] Retrying request to /chat/completions in 0.969518 seconds
[2024-20-04 15:10:54 INFO] HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 429 Too Many Requests"
[2024-20-04 15:10:54 INFO] Retrying request to /chat/completions in 1.589606 seconds

thinkwee commented 2 months ago

We have updated the backend LLM configurations, please try again~

rwtews commented 1 month ago

Same problem, waiting for it to be solved.

LPsyCongroo commented 1 month ago

Same here

bdv1 commented 1 month ago

I got it to work now. I use the GPT-3.5-Turbo model; 4.5 still gives me a rate-limit error. (I had to change some imports that read from the wrong utils.py file as well.)
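For anyone else switching models, a hedged sketch of the kind of selection involved (ModelType comes from camel/typing.py; the member GPT_3_5_TURBO_NEW appears in the log above, but the other member names and this helper are assumptions, not ChatDev's verbatim code):

```python
# Hypothetical helper for picking a cheaper backend model; GPT-4-class models have much
# tighter rate limits on low usage tiers, so falling back to 3.5-Turbo avoids many 429s.
from camel.typing import ModelType

def choose_model(prefer_gpt4: bool = False) -> ModelType:
    # GPT_3_5_TURBO_NEW is the member shown in the log above; GPT_4 is assumed to exist.
    return ModelType.GPT_4 if prefer_gpt4 else ModelType.GPT_3_5_TURBO_NEW
```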

LPsyCongroo commented 1 month ago

> I got it to work now. I use the GPT-3.5-Turbo model; 4.5 still gives me a rate-limit error. (I had to change some imports that read from the wrong utils.py file as well.)

Can you share the changes you made to fix the imports?

bdv1 commented 4 weeks ago

Hi, sorry for the wait. I was a bit busy. What I changed was:

In ecl/memory.py (changed utils to ecl.utils):

Before: from utils import get_easyDict_from_filepath, log_and_print_online

After: from ecl.utils import get_easyDict_from_filepath, log_and_print_online

In pydantic_core's core_schema.py (commented this line out):

from typing_extensions import deprecated

In ecl/code.py (same change, utils to ecl.utils):

from ecl.utils import get_easyDict_from_filepath

Good luck!