Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co
MIT License

Maximum context length exceeded after `execute_shell` #3244

Closed gtx-cyber closed 1 year ago

gtx-cyber commented 1 year ago

⚠️ Search for existing issues first ⚠️

Which Operating System are you using?

Linux

Which version of Auto-GPT are you using?

Latest Release

GPT-3 or GPT-4?

GPT-3.5

Steps to reproduce 🕹

This error occurred while AutoGPT was installing a library as part of its own run:

```
NEXT ACTION: COMMAND = execute_shell ARGUMENTS = {'command_line': 'pip install en_core_web_sm'}
Executing command 'pip install en_core_web_sm' in working directory '/home/appuser/auto_gpt_workspace'
```

Current behavior 😯

```
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 9956 tokens (9956 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
```

And the program terminates.

Expected behavior 🤔

It should automatically reduce the token length instead of terminating.

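A minimal sketch of the auto-trimming behavior requested above, assuming a chat-style message history. All names here are hypothetical, and the 4-characters-per-token ratio is a rough heuristic (AutoGPT's real code counts tokens with tiktoken); this illustrates the idea, not the project's implementation:

```python
# Hypothetical sketch: drop the oldest non-system messages until the
# prompt fits the model's context window. The 4-chars-per-token ratio
# is a crude heuristic; production code should count with tiktoken.

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int = 8191, reserve: int = 1000) -> list[dict]:
    """Keep the system prompt plus the newest messages that fit the budget.

    `reserve` holds back room for the model's completion.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept: list[dict] = []
    used = sum(estimate_tokens(m["content"]) for m in system) + reserve
    for msg in reversed(rest):  # walk newest-first
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return system + list(reversed(kept))

history = [{"role": "system", "content": "You are Auto-GPT."}] + [
    {"role": "user", "content": "x" * 8000} for _ in range(10)
]
trimmed = trim_history(history)
print(len(trimmed))  # system prompt + the 3 newest messages that fit
```

The key design point is trimming from the oldest end so recent context survives, rather than failing the whole request.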
sorokinvj commented 1 year ago

I hit the same with:

NEXT ACTION:  COMMAND = execute_shell ARGUMENTS = {'command_line': 'pip list --outdated'}
Executing command 'pip list --outdated' in working directory '/Users/../Auto-GPT-0.2.2/auto_gpt_workspace'
perrosnk commented 1 year ago

I have experienced the same issue

brngdsn commented 1 year ago

FYI, I re-ran mine after the same kind of crash, and when prompted I told it to "decrease token size because you keep erroring out", and it actually worked afterwards. (I also manually accepted each prompt for a few steps before giving it -n.)
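The workaround above can also be made persistent. Auto-GPT releases from this era exposed token-limit settings in the `.env` file; the setting names below are as I recall them from contemporary `.env.template` files, so verify against your own copy before relying on them:

```shell
# .env -- lower the prompt token budget so history is trimmed sooner.
# Setting names assumed from .env.template of this era; verify in yours.
FAST_TOKEN_LIMIT=4000
SMART_TOKEN_LIMIT=8000
```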

DMTarmey commented 1 year ago

I get this error:

```
NEXT ACTION: COMMAND = search_files ARGUMENTS = {'directory': '.'}
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\Dell User\OneDrive\Desktop\darren\Auto-GPT\autogpt\__main__.py", line 53, in <module>
    main()
  File "C:\Users\Dell User\OneDrive\Desktop\darren\Auto-GPT\autogpt\__main__.py", line 49, in main
    agent.start_interaction_loop()
  File "C:\Users\Dell User\OneDrive\Desktop\darren\Auto-GPT\autogpt\agent\agent.py", line 170, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "C:\Users\Dell User\OneDrive\Desktop\darren\Auto-GPT\autogpt\memory\local.py", line 76, in add
    embedding = create_embedding_with_ada(text)
  File "C:\Users\Dell User\OneDrive\Desktop\darren\Auto-GPT\autogpt\llm_utils.py", line 137, in create_embedding_with_ada
    return openai.Embedding.create(
  File "C:\Users\Dell User\AppData\Roaming\Python\Python311\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "C:\Users\Dell User\AppData\Roaming\Python\Python311\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\Dell User\AppData\Roaming\Python\Python311\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\Users\Dell User\AppData\Roaming\Python\Python311\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\Dell User\AppData\Roaming\Python\Python311\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 163109 tokens (163109 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.

C:\Users\Dell User\OneDrive\Desktop\darren\Auto-GPT>
```

I think this is about tokens. Any help would be appreciated.
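Note that this last traceback is a different failure mode from the original report: it is the embedding call (`create_embedding_with_ada`, backed by text-embedding-ada-002 with its 8191-token limit) that receives 163109 tokens, because the entire file-search result is embedded in one request. A hedged sketch of the usual fix, splitting oversized text into chunks before embedding; the function name is hypothetical and the splitter uses a character-based heuristic rather than a real tokenizer:

```python
# Hypothetical sketch: split oversized text into chunks that each fit
# under the text-embedding-ada-002 limit (8191 tokens), then embed each
# chunk separately. ~4 chars/token is a heuristic; use tiktoken in practice.

MAX_EMBED_TOKENS = 8191
CHARS_PER_TOKEN = 4  # rough heuristic for English text

def chunk_for_embedding(text: str, max_tokens: int = MAX_EMBED_TOKENS) -> list[str]:
    """Split text into pieces that each fit under the embedding token limit."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

chunks = chunk_for_embedding("a" * 100_000)
print(len(chunks), max(len(c) for c in chunks))

# Each chunk would then be embedded on its own, e.g. with the old 0.x API:
# for chunk in chunks:
#     openai.Embedding.create(input=[chunk], model="text-embedding-ada-002")
```

Chunking at the memory layer would let `search_files` results of any size be stored without tripping the embedding model's context limit.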