[X] I have searched the existing issues, and there is no existing issue for my problem
Which Operating System are you using?
Windows
Which version of Auto-GPT are you using?
Master (branch)
GPT-3 or GPT-4?
GPT-3.5
Steps to reproduce 🕹
It fails with the following output, on both 0.2.2 and the latest master:
SYSTEM: Human feedback: code is in E:\aigc\Auto-GPT\auto_gpt_workspace\ceph
THOUGHTS: Now that I have access to the code, I can start analyzing it to develop a detailed technical solution for the storage solution.
REASONING: Analyzing the code will give me a better understanding of how to use CephFS and the quincy release of ceph to create a storage solution that meets the requirements.
PLAN:
Analyze the code for CephFS and the quincy release of ceph
Develop a detailed technical solution for the storage solution
CRITICISM: I need to ensure that I am thorough in my analysis of the code to avoid any mistakes in the implementation of the storage solution.
NEXT ACTION: COMMAND = read_file ARGUMENTS = {'filename': 'E:\aigc\Auto-GPT\auto_gpt_workspace\ceph\src\mds\MDSRank.cc'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for ...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "E:\aigc\Auto-GPT\autogpt\__main__.py", line 5, in <module>
autogpt.cli.main()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1635, in invoke
rv = super().invoke(ctx)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\aigc\Auto-GPT\autogpt\cli.py", line 177, in main
agent.start_interaction_loop()
File "E:\aigc\Auto-GPT\autogpt\agent\agent.py", line 213, in start_interaction_loop
self.memory.add(memory_to_add)
File "E:\aigc\Auto-GPT\autogpt\memory\local.py", line 76, in add
embedding = create_embedding_with_ada(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\aigc\Auto-GPT\autogpt\llm_utils.py", line 170, in create_embedding_with_ada
return openai.Embedding.create(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 34470 tokens (34470 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
Press any key to continue . . .
E:\aigc\Auto-GPT>
Current behavior 😯
Reading a source code file exceeds OpenAI's token limit, which makes the process exit.
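For reference, the error reports an embedding request of 34470 tokens against the model's 8191-token limit; at a rough 4 characters per token, that corresponds to a source file of about 130 KB. A minimal sketch of a pre-flight size check that would catch this before calling the API (the 4-chars-per-token ratio and both helper names are illustrative assumptions, not AutoGPT code):

```python
# Rough pre-flight check before embedding text. The ~4 chars/token
# ratio is a heuristic, not an exact tokenizer count.
EMBEDDING_TOKEN_LIMIT = 8191  # text-embedding-ada-002 context length

def estimate_tokens(text: str) -> int:
    """Approximate token count (hypothetical helper, not in AutoGPT)."""
    return max(1, len(text) // 4)

def fits_embedding_limit(text: str) -> bool:
    """Return True if the text is likely to fit in one embedding request."""
    return estimate_tokens(text) <= EMBEDDING_TOKEN_LIMIT

small = "x" * 1_000    # ~250 tokens: fine
large = "x" * 140_000  # ~35,000 tokens: would trigger the error above
```

A check like this could skip or truncate the memory entry instead of letting the InvalidRequestError crash the interaction loop.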
Expected behavior 🤔
It should work with source code files; most of them are only a couple of KB in size.
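One possible mitigation (a sketch only, not the project's actual fix): split oversized file contents into overlapping chunks that each fit under the embedding limit, and embed each chunk separately. The chunk size and overlap values below are illustrative assumptions:

```python
def chunk_text(text: str, max_chars: int = 8_000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks of at most max_chars characters.

    max_chars=8000 (~2000 tokens at ~4 chars/token) leaves generous
    headroom under the 8191-token embedding limit; both values are
    illustrative, not tuned.
    """
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        if start + max_chars >= len(text):
            break  # last chunk reaches the end of the text
        start += max_chars - overlap  # step back by `overlap` for context
    return chunks
```

Each chunk could then be passed to create_embedding_with_ada individually instead of sending the whole file in one request.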
Your prompt 📝
Please read the source code file and give me what you find.
Your Logs 📒
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "E:\aigc\Auto-GPT\autogpt\__main__.py", line 5, in <module>
autogpt.cli.main()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1635, in invoke
rv = super().invoke(ctx)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\click\decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\aigc\Auto-GPT\autogpt\cli.py", line 177, in main
agent.start_interaction_loop()
File "E:\aigc\Auto-GPT\autogpt\agent\agent.py", line 213, in start_interaction_loop
self.memory.add(memory_to_add)
File "E:\aigc\Auto-GPT\autogpt\memory\local.py", line 76, in add
embedding = create_embedding_with_ada(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\aigc\Auto-GPT\autogpt\llm_utils.py", line 170, in create_embedding_with_ada
return openai.Embedding.create(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 34470 tokens (34470 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
Press any key to continue . . .