assafelovic / gpt-researcher

GPT based autonomous agent that does online comprehensive research on any given topic
https://gptr.dev
MIT License
12.98k stars 1.61k forks

Code halts if OpenAI rate limit is reached. openai.RateLimitError: Error code: 429 #614

Open JustUser1410 opened 1 week ago

JustUser1410 commented 1 week ago

After spending quite a bit of time and using a chunk of my resources, the code suddenly halted just to tell me I need to "wait for 2.75s", with no option to continue the research. The exception is raised by the openai module, but it could be handled more gracefully than dropping all the progress. The agent could simply take a break and retry.

So far I have only encountered this while using the multi-agent system.
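A likely reason the multi-agent flow trips the limit is that several draft tasks run at once via `asyncio.gather` (see `run_parallel_research` in the traceback below), so their token usage all lands in the same per-minute window. A minimal sketch of capping in-flight model calls with an `asyncio.Semaphore` — the `call_model_stub` name and the `limit` value are illustrative assumptions, not the project's API:

```python
import asyncio

async def call_model_stub(topic: str) -> str:
    # Stand-in for the real LLM call; the real one consumes TPM budget.
    await asyncio.sleep(0)
    return f"draft: {topic}"

async def gather_throttled(topics, limit=2):
    """Run draft tasks concurrently, but cap how many are in flight."""
    sem = asyncio.Semaphore(limit)

    async def one(topic):
        async with sem:  # at most `limit` model calls run at once
            return await call_model_stub(topic)

    return await asyncio.gather(*(one(t) for t in topics))

drafts = asyncio.run(gather_throttled(["a", "b", "c"]))
```

Throttling concurrency lowers peak tokens-per-minute at the cost of a slower research run, which is usually preferable to losing all progress.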

File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\openai\_base_client.py", line 1020, in _request raise self._make_status_error_from_response(err.response) from None openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4o in organization org-xxxxxxxxxxxxxxxxxxxxxxxxx on tokens per min (TPM): Limit 30000, Used 28966, Requested 2409. Please try again in 2.75s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}
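A generic way to handle this is to catch the rate-limit error and retry after a pause instead of aborting. A minimal sketch with exponential backoff — it uses a stand-in `RateLimitError` class so the example is self-contained (the real one lives in the `openai` package), and `flaky_request` is a hypothetical placeholder for the actual model call:

```python
import time

def retry_with_backoff(fn, retryable=(Exception,), max_attempts=5, base_delay=1.0):
    """Call fn, retrying on retryable exceptions with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: re-raise the last error
            time.sleep(base_delay * (2 ** attempt))

# Stand-in for openai.RateLimitError so the sketch runs on its own.
class RateLimitError(Exception):
    pass

calls = {"n": 0}

def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429: rate limit reached, try again shortly")
    return "report section"

result = retry_with_backoff(flaky_request, retryable=(RateLimitError,), base_delay=0.01)
```

In the real code this wrapper would go around the model call, retrying only on `openai.RateLimitError` so genuine failures still surface immediately.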

Edit: After looking into the code, I realized it's not that simple.

JustUser1410 commented 1 week ago

Traceback (most recent call last):
  File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\main.py", line 32, in <module>
    asyncio.run(main())
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\asyncio\runners.py", line 190, in run
    return runner.run(main)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\asyncio\base_events.py", line 654, in run_until_complete
    return future.result()
  File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\main.py", line 27, in main
    research_report = await chief_editor.run_research_task()
  File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\agents\master.py", line 57, in run_research_task
    result = await chain.ainvoke({"task": self.task})
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langgraph\pregel\__init__.py", line 1504, in ainvoke
    async for chunk in self.astream(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langgraph\pregel\__init__.py", line 1333, in astream
    _panic_or_proceed(done, inflight, step)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langgraph\pregel\__init__.py", line 1537, in _panic_or_proceed
    raise exc
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langgraph\pregel\retry.py", line 120, in arun_with_retry
    await task.proc.ainvoke(task.input, task.config)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langchain_core\runnables\base.py", line 2540, in ainvoke
    input = await step.ainvoke(input, config, **kwargs)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langgraph\utils.py", line 117, in ainvoke
    ret = await asyncio.create_task(
  File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\agents\editor.py", line 85, in run_parallel_research
    research_results = [result['draft'] for result in await asyncio.gather(*final_drafts)]
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langgraph\pregel\__init__.py", line 1504, in ainvoke
    async for chunk in self.astream(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langgraph\pregel\__init__.py", line 1333, in astream
    _panic_or_proceed(done, inflight, step)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langgraph\pregel\__init__.py", line 1537, in _panic_or_proceed
    raise exc
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langgraph\pregel\retry.py", line 120, in arun_with_retry
    await task.proc.ainvoke(task.input, task.config)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langchain_core\runnables\base.py", line 2540, in ainvoke
    input = await step.ainvoke(input, config, **kwargs)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langgraph\utils.py", line 117, in ainvoke
    ret = await asyncio.create_task(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langchain_core\runnables\config.py", line 557, in run_in_executor
    return await asyncio.get_running_loop().run_in_executor(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langchain_core\runnables\config.py", line 548, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\agents\reviser.py", line 46, in run
    revision = self.revise_draft(draft_state)
  File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\agents\reviser.py", line 41, in revise_draft
    response = call_model(prompt, model=task.get("model"), response_format='json')
  File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\agents\utils\llms.py", line 14, in call_model
    response = ChatOpenAI(model=model, max_retries=max_retries, model_kwargs=optional_params).invoke(lc_messages).content
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langchain_core\language_models\chat_models.py", line 170, in invoke
    self.generate_prompt(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langchain_core\language_models\chat_models.py", line 599, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langchain_core\language_models\chat_models.py", line 456, in generate
    raise e
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langchain_core\language_models\chat_models.py", line 446, in generate
    self._generate_with_cache(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langchain_core\language_models\chat_models.py", line 671, in _generate_with_cache
    result = self._generate(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\langchain_openai\chat_models\base.py", line 537, in _generate
    response = self.client.create(messages=message_dicts, **params)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\openai\_utils\_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\openai\resources\chat\completions.py", line 606, in create
    return self._post(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\openai\_base_client.py", line 1240, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\openai\_base_client.py", line 921, in request
    return self._request(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\openai\_base_client.py", line 1005, in _request
    return self._retry_request(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\openai\_base_client.py", line 1053, in _retry_request
    return self._request(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\openai\_base_client.py", line 1005, in _request
    return self._retry_request(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\openai\_base_client.py", line 1053, in _retry_request
    return self._request(
  File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\openai\_base_client.py", line 1020, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4o in organization org-xxxxxxxxxxxxxxxxxxxxxxxxx on tokens per min (TPM): Limit 30000, Used 28966, Requested 2409. Please try again in 2.75s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}
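Two details in the traceback are worth noting. First, `call_model` already passes `max_retries` to `ChatOpenAI`, so raising that value is one mitigation. Second, the 429 payload itself says how long to wait ("Please try again in 2.75s"), so the client could honor the server's own hint before retrying. A sketch of extracting that hint — the `suggested_wait_seconds` helper name is hypothetical:

```python
import re

def suggested_wait_seconds(message: str, default: float = 5.0) -> float:
    """Pull the 'Please try again in Xs' hint out of a 429 error message."""
    m = re.search(r"try again in ([0-9.]+)s", message)
    return float(m.group(1)) if m else default

msg = ("Rate limit reached for gpt-4o on tokens per min (TPM): "
       "Limit 30000, Used 28966, Requested 2409. Please try again in 2.75s.")
wait = suggested_wait_seconds(msg)  # 2.75
```

Sleeping for this hinted duration (plus a small buffer) before retrying lets a run recover from a 2.75-second stall instead of discarding all accumulated research.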

assafelovic commented 1 week ago

Hey @JustUser1410, I'll have a look at this and add graceful timeouts

Mitchell-xiyunfeng commented 6 days ago

> Hey @JustUser1410, I'll have a look at this and add graceful timeouts

Two important questions, please:

  1. Once these improvements are complete, will a GPT Researcher instance deployed on RepoCloud also be able to pick up the updates?
  2. How does GPT Researcher deployed on RepoCloud use local documents for knowledge QA? I have already added the DOC_PATH environment variable pointing to the documents folder.