Open JustUser1410 opened 5 months ago
Traceback (most recent call last):
File "C:\Users\Tomas\Documents\Python Workspace\gpt-researcher\multi_agents\main.py", line 32, in
Hey @JustUser1410, I'll have a look at this and add graceful timeouts
Two important questions please:
@assafelovic Do you have any update on the graceful timeouts?
@Daniel-K-Ivanov in our backlog to be shipped soon. Does this occur often?
@assafelovic Yes, especially when run on local documents that are of considerable size.
Similar to @OX304, I'm having the same issue when working with local documents. When this occurs, the user interface gives no indication that anything has stopped working; it just looks like the job is taking a long time to complete.
Only in the docker compose logs do you see something like:
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for text-embedding-3-small in organization org-************** on tokens per min (TPM): Limit 1000000, Used 958784, Requested 189929. Please try again in 8.922s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}
The workaround is to either use fewer documents and a shorter query, or to gracefully wait out the retry delay before continuing. It would be nice if the system didn't just give up but instead backed off until the suggested retry time, at least in the case of OpenAI. With the premature stop, it just ends up wasting OpenAI API resources.
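For illustration, here is a minimal sketch of that kind of backoff using the tenacity library. This is not gpt-researcher's actual code, and the `embed_chunk` helper is hypothetical:

```python
# Hypothetical helper showing exponential backoff on OpenAI 429s;
# not gpt-researcher's implementation.
import openai
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_random_exponential

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

@retry(
    retry=retry_if_exception_type(openai.RateLimitError),  # only retry on 429s
    wait=wait_random_exponential(min=1, max=60),           # backoff with jitter, capped at 60s
    stop=stop_after_attempt(6),                            # eventually give up
)
def embed_chunk(text: str) -> list[float]:
    """Embed one chunk, pausing on rate limits instead of aborting the whole run."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding
```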
As an aside, thanks for all the work that you guys do with this.
> @Daniel-K-Ivanov in our backlog to be shipped soon. Does this occur often?
I'm experiencing rate limits on 80% of my research attempts. I'm also curious whether there is a limit on the volume of text in a local file in my-docs? When I load longer books it also seems to lock up; unsure why.
In general, there really needs to be more transparency about the processes going on: some kind of addition to the interface that indicates progress and lets you make decisions when an error occurs, such as restart, quit, wait, or try again. Also, having to look at the terminal for the error info is annoying. I have no idea how all this works; I'm not a coder, I'm a designer. The interface is hard to work with and doesn't offer enough user prompts, feedback, etc. Right now I'm trying to run a research job and it just stopped... no feedback, no error, nothing. It's been quite a challenge to use this.
I'm also curious whether there is a limit on the number of documents in my-docs, not just a single-document length limit, that could be causing problems.
I just also want to say that it's amazing that you've built this. I have no concept of how hard this must be to do!
After spending quite a bit of time and using a chunk of my resources, the code suddenly halted just to tell me that I need to "wait for 2.75s", but there is no option to continue the research. The exception is raised by the openai module, but it could be handled more gracefully than dropping all the progress; the agent could just take a break.
So far I have only encountered this while using the multi-agent system.
File "C:\Users\Tomas\anaconda3\envs\gpt2\Lib\site-packages\openai\_base_client.py", line 1020, in _request raise self._make_status_error_from_response(err.response) from None openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4o in organization org-xxxxxxxxxxxxxxxxxxxxxxxxx on tokens per min (TPM): Limit 30000, Used 28966, Requested 2409. Please try again in 2.75s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}
Edit: After looking into the code, I realized it's not that simple.
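Worth noting: the openai Python SDK (v1+) can already retry 429s on its own with exponential backoff via its max_retries setting (the default is 2 retries). Whether gpt-researcher's multi-agent path exposes that knob is a separate question, but as a sketch:

```python
# The v1 openai SDK retries rate-limit and connection errors with
# exponential backoff on its own; this just raises the retry count.
import openai

client = openai.OpenAI(
    max_retries=5,   # bump the SDK's built-in retry count (default is 2)
    timeout=60.0,    # per-request timeout in seconds
)
```

That still wouldn't preserve mid-run agent state across a hard failure, which may be the "not that simple" part.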