assafelovic / gpt-researcher

GPT based autonomous agent that does online comprehensive research on any given topic
https://gptr.dev
MIT License

Rate limit #62

Closed: ItsTiage closed this issue 1 year ago

ItsTiage commented 1 year ago

Hi, I'm using the 'gpt-3.5-turbo-16k' model. So far I've applied the existing fixes for every issue I've run into, and thank you very much for your amazing support.

However, I've now hit a new issue with the rate limit. Here's the error output:

ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\protocols\websockets\wsproto_impl.py", line 249, in run_asgi
    result = await self.app(self.scope, self.receive, self.send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastapi\applications.py", line 289, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\middleware\errors.py", line 149, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
    raise e
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\starlette\routing.py", line 82, in app
    await func(session)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\fastapi\routing.py", line 324, in app
    await dependant.call(**values)
  File "C:\Users\tiago\gpt-researcher\main.py", line 50, in websocket_endpoint
    await manager.start_streaming(task, report_type, agent, websocket)
  File "C:\Users\tiago\gpt-researcher\agent\run.py", line 38, in start_streaming
    report, path = await run_agent(task, report_type, agent, websocket)
  File "C:\Users\tiago\gpt-researcher\agent\run.py", line 52, in run_agent
    report, path = await assistant.write_report(report_type, websocket)
  File "C:\Users\tiago\gpt-researcher\agent\research_agent.py", line 169, in write_report
    path = await write_md_to_pdf(report_type, self.directory_name, await answer)
  File "C:\Users\tiago\gpt-researcher\agent\llm_utils.py", line 85, in stream_response
    for chunk in openai.ChatCompletion.create(
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\tiago\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: Rate limit reached for default-gpt-3.5-turbo-16k in organization org-JsGGDnWMEgbr9x3ZmlIZva3l on requests per min. Limit: 3 / min. Please try again in 20s. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.

skippa commented 1 year ago

Same issue here. Is it something to do with using the free tier of OpenAI? I thought the free tier would at least let me have a short play with this...

rotemweiss57 commented 1 year ago

3/min is a bit odd.. I know free trial users are supposed to have 20/min, but I might be mistaken. Consider upgrading, or change the number of links per query from 5 to 3 (because the summarizer processes them simultaneously). This can be done in actions/web_search; a rough sketch of that change is below.
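For reference, the change being suggested looks roughly like the following. This is a hypothetical sketch: the function name, signature, and duckduckgo_search usage are assumptions for illustration, not the exact contents of actions/web_search in the repo.

```python
# Hypothetical sketch of a search helper like actions/web_search.py.
# The real code in the repo may differ; the point is to lower the number
# of links fetched (and therefore summarized in parallel) per query.
from duckduckgo_search import ddg  # search backend assumed for illustration


def web_search(query: str, num_results: int = 3) -> list[str]:
    """Return up to `num_results` result URLs for a query.

    Lowering num_results from 5 to 3 means fewer pages are summarized at
    once, so fewer simultaneous OpenAI requests hit the 3 RPM limit.
    """
    results = ddg(query, max_results=num_results) or []
    return [r["href"] for r in results]
```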

Hope it helps!

PTaljaard commented 1 year ago

I have the same problem. I changed the 5-link limit to 2 and still got "Failed to get response from OpenAI API". Also on the free tier (3 RPM); after ten attempts it gives up:

WARNING:root:Rate limit reached, backing off...
WARNING:root:Rate limit reached, backing off...
WARNING:root:Rate limit reached, backing off...
(repeated 10 times in total)
ERROR:root:Failed to get response after 10 attempts
ERROR: Exception in ASGI application

So I guess I will have to upgrade and pay the piper.
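For context, warnings like these come from a retry loop around the OpenAI call that backs off on rate-limit errors and gives up after 10 attempts. A minimal sketch of that kind of loop, assuming the pre-1.0 openai SDK seen in the traceback above (this is an illustration, not the project's exact llm_utils code):

```python
# Minimal sketch of a backoff loop that would produce the
# "Rate limit reached, backing off..." warnings above.
import logging
import time

import openai  # pre-1.0 openai SDK, matching the traceback above


def create_chat_completion_with_backoff(messages, model="gpt-3.5-turbo-16k",
                                        max_attempts=10, base_delay=2):
    """Call ChatCompletion, backing off when the rate limit is hit."""
    for attempt in range(1, max_attempts + 1):
        try:
            return openai.ChatCompletion.create(model=model, messages=messages)
        except openai.error.RateLimitError:
            logging.warning("Rate limit reached, backing off...")
            time.sleep(base_delay * attempt)  # wait longer on each retry
    logging.error("Failed to get response after %d attempts", max_attempts)
    raise RuntimeError("Failed to get response from OpenAI API")
```

With only 3 requests per minute available, even a loop like this exhausts its attempts quickly once several summarization calls queue up at the same time, which is why reducing concurrency or adding billing is the practical fix.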

gregdrizz commented 1 year ago

Are you using a valid API key?

PTaljaard commented 1 year ago

Morning Greg. I had Python installed previously; I used it for a different project (Anaconda/Spyder, with a Django framework) to develop a system to manage my greenhouses. I struggled to get the downloaded gpt-researcher working and eventually switched to the Docker image (total newbie at Docker). I got Docker and the API keys working (after removing the {}), and then made some progress, but could not get past the issue above. Yet... I am busy with research for a PhD (AI, ethics, and complexity theory applied to agriculture), so I hope I can get this working to save me months on lit reviews, etc. Thanks for the support.

polygonfuture commented 1 year ago

[UPDATE] I was having the same issue.

Adding billing information to my main account on the OpenAI website solved it.

If anyone else is having this issue, you have to add billing information on the OpenAI platform site itself. ChatGPT Plus billing information is separate and different.

Adding screenshots of the relevant site sections:

Top right corner, under your account name: [screenshot]

Billing overview section: [screenshot]

Note: this is the view after having subscribed to a paid account.
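Once billing is added, a quick way to confirm the key and account are being picked up is a minimal API call outside gpt-researcher. A sketch assuming the pre-1.0 openai SDK from the traceback above, with the key read from the conventional OPENAI_API_KEY environment variable (adjust if your setup differs):

```python
# Hypothetical sanity check: confirms the API key is valid and the account
# can reach the OpenAI API at all. Not part of gpt-researcher itself.
import os

import openai  # pre-1.0 openai SDK

openai.api_key = os.environ["OPENAI_API_KEY"]

# Listing models is a cheap call that fails fast on an invalid key.
print([m.id for m in openai.Model.list().data][:5])

# A single small completion exercises the same code path that was
# hitting the rate limit in the traceback above.
resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```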

rotemweiss57 commented 1 year ago

@polygonfuture Thank you for the detailed solution!