AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Summary 💡
While running AutoGPT normally, I hit an overloaded OpenAI API issue that terminated AutoGPT, so all progress was lost.
I asked GPT-4 what it thought, and it suggested implementing retry functionality that waits and then resumes when such issues occur, so that progress is not lost.
Examples 🌈
```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\emilr\Documents\AutoGPT\auto-gpt\autogpt\__main__.py", line 50, in <module>
    main()
  File "C:\Users\emilr\Documents\AutoGPT\auto-gpt\autogpt\__main__.py", line 46, in main
    agent.start_interaction_loop()
  File "C:\Users\emilr\Documents\AutoGPT\auto-gpt\autogpt\agent\agent.py", line 181, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "C:\Users\emilr\Documents\AutoGPT\auto-gpt\autogpt\memory\weaviate.py", line 57, in add
    vector = get_ada_embedding(data)
             ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\emilr\Documents\AutoGPT\auto-gpt\autogpt\memory\base.py", line 19, in get_ada_embedding
    return openai.Embedding.create(input=[text], model="text-embedding-ada-002")[
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\emilr\Documents\AutoGPT\auto-gpt\venv\Lib\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\emilr\Documents\AutoGPT\auto-gpt\venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "C:\Users\emilr\Documents\AutoGPT\auto-gpt\venv\Lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\emilr\Documents\AutoGPT\auto-gpt\venv\Lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\emilr\Documents\AutoGPT\auto-gpt\venv\Lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.RateLimitError: The server is currently overloaded with other requests. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists.
```
Motivation 🔦
Perhaps a retry feature would be a useful addition to AutoGPT, so it can automatically recover and keep running when it encounters errors and issues like this.
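As a rough illustration of what this could look like, here is a minimal sketch of a generic retry wrapper with exponential backoff. The name `retry_with_backoff` and its parameters are hypothetical, not part of the AutoGPT codebase; in practice the `retryable` tuple would be set to `(openai.error.RateLimitError,)` and the wrapper applied around the `openai.Embedding.create` call in `get_ada_embedding`.

```python
import time


def retry_with_backoff(func, max_retries=5, base_delay=1.0, retryable=(Exception,)):
    """Call func(), retrying with exponential backoff when a retryable
    exception is raised. Re-raises the exception after the final attempt.

    Hypothetical helper for illustration; in AutoGPT, `retryable` would be
    (openai.error.RateLimitError,) and `func` the embedding/completion call.
    """
    for attempt in range(max_retries):
        try:
            return func()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of retries: give up and surface the error
            # wait 1x, 2x, 4x, ... the base delay before trying again
            time.sleep(base_delay * (2 ** attempt))


# Usage sketch with a stand-in for an overloaded API call:
calls = {"n": 0}

def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("The server is currently overloaded")
    return "ok"

result = retry_with_backoff(flaky_call, base_delay=0.01)  # succeeds on the third attempt
```

With something like this in place, a transient `RateLimitError` would pause the agent briefly instead of crashing the interaction loop and discarding all progress.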