Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co

1. How does data_ingestion work? 2. read_file always throws "This model's maximum context length is 8191 tokens, however you requested 14573 tokens" #2969

Closed callect closed 1 year ago

callect commented 1 year ago

⚠️ Search for existing issues first ⚠️

Which Operating System are you using?

Windows

Which version of Auto-GPT are you using?

Latest Release

GPT-3 or GPT-4?

GPT-3.5

Steps to reproduce 🕹

First I run:

python data_ingestion.py --file expression.srt --max_length 2000
Using memory of type: LocalCache
Working with file expression.srt
File length: 36519 characters
Ingesting chunk 1 / 21 into memory
Ingesting chunk 2 / 21 into memory
Ingesting chunk 3 / 21 into memory
Ingesting chunk 4 / 21 into memory
Ingesting chunk 5 / 21 into memory
Ingesting chunk 6 / 21 into memory
Ingesting chunk 7 / 21 into memory
Ingesting chunk 8 / 21 into memory
Ingesting chunk 9 / 21 into memory
Ingesting chunk 10 / 21 into memory
Ingesting chunk 11 / 21 into memory
Ingesting chunk 12 / 21 into memory
Ingesting chunk 13 / 21 into memory
Ingesting chunk 14 / 21 into memory
Ingesting chunk 15 / 21 into memory
Ingesting chunk 16 / 21 into memory
Ingesting chunk 17 / 21 into memory
Ingesting chunk 18 / 21 into memory
Ingesting chunk 19 / 21 into memory
Ingesting chunk 20 / 21 into memory
Ingesting chunk 21 / 21 into memory
Done ingesting 21 chunks from expression.srt.
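On question 1: conceptually, data_ingestion.py splits the file into chunks of at most --max_length characters and embeds each chunk into memory separately. The 21 chunks above (rather than ceil(36519 / 2000) = 19) suggest neighbouring chunks overlap so context is not lost at chunk boundaries. A minimal sketch of that flow; the function names, the overlap default, and the chunk-label format are illustrative, not the exact AutoGPT source:

```python
# Hedged sketch of what data_ingestion.py does conceptually; names and
# defaults here are illustrative, not the exact AutoGPT implementation.

def split_text(content: str, max_length: int = 2000, overlap: int = 200):
    """Yield chunks of at most `max_length` chars, overlapping by `overlap`."""
    assert max_length > overlap, "stride must be positive"
    start = 0
    while start < len(content):
        end = min(start + max_length, len(content))
        yield content[start:end]
        if end == len(content):
            break
        # Step back by `overlap` so neighbouring chunks share context; this
        # is why 36,519 chars at max_length 2000 yields 21 chunks, not 19.
        start = end - overlap


def ingest_file(path: str, memory, max_length: int = 2000, overlap: int = 200) -> None:
    with open(path, encoding="utf-8") as f:
        content = f.read()
    chunks = list(split_text(content, max_length, overlap))
    for i, chunk in enumerate(chunks, start=1):
        print(f"Ingesting chunk {i} / {len(chunks)} into memory")
        # memory.add() embeds the chunk (e.g. with text-embedding-ada-002)
        # and stores the vector for later similarity search by the agent.
        memory.add(f"Filename: {path}\nContent part#{i}/{len(chunks)}: {chunk}")
```

Each chunk is a few hundred tokens at most, so every embedding request stays comfortably under the model's limit.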

Next I run autogpt, but it still reminds me that it needs to read_file expression.srt, and when I let it start reading I get the error "This model's maximum context length is 8191 tokens, however you requested 14573 tokens".

Goals: ['get memory chunks with expression.srt', 'use openai_translation agent to Chinese every chunk then save to file name with chunk id.']
Continue (y/n): y
Using memory of type: LocalCache
Using Browser: chrome
THOUGHTS: I should start by getting the memory chunks with expression.srt
REASONING: I need to have the memory chunks before I can translate them to Chinese
PLAN:
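On question 2: ingestion is not the step that fails. read_file returns the entire 36,519-character file as one command result, and the agent then tries to embed that whole result with text-embedding-ada-002, whose limit is 8191 tokens per request; the file alone is roughly 14.5k tokens. A quick way to verify this, sketched below with tiktoken (the constant names are mine, not AutoGPT's):

```python
# Hedged sketch: count tokens before embedding. The constant names are
# mine; text-embedding-ada-002's documented per-request limit is 8191 tokens.
import tiktoken

MAX_EMBEDDING_TOKENS = 8191
ENCODING = tiktoken.encoding_for_model("text-embedding-ada-002")

def count_tokens(text: str) -> int:
    return len(ENCODING.encode(text))

# The ~36,519-character subtitle file tokenizes to well above 8191 tokens,
# which is why embedding the full read_file result fails even though the
# 2,000-character ingestion chunks each embedded without problems.
with open("expression.srt", encoding="utf-8") as f:
    print(count_tokens(f.read()))
```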

Current behavior 😯

No response

Expected behavior 🤔

No response

Your prompt 📝

# Paste your prompt here

Your Logs 📒

<insert your logs here>

ntindle commented 1 year ago

This should be fixed; please reset your .env file and pull the latest stable.

callect commented 1 year ago

I am already using the latest version, and my .env file is up to date. The problem is still there.

NEXT ACTION: COMMAND = read_file ARGUMENTS = {'file': 'expression.srt'}
Traceback (most recent call last):
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\Administrator\Auto-GPT\autogpt\__main__.py", line 5, in <module>
    autogpt.cli.main()
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1635, in invoke
    rv = super().invoke(ctx)
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\site-packages\click\decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "C:\Users\Administrator\Auto-GPT\autogpt\cli.py", line 151, in main
    agent.start_interaction_loop()
  File "C:\Users\Administrator\Auto-GPT\autogpt\agent\agent.py", line 184, in start_interaction_loop
    self.memory.add(memory_to_add)
  File "C:\Users\Administrator\Auto-GPT\autogpt\memory\local.py", line 76, in add
    embedding = create_embedding_with_ada(text)
  File "C:\Users\Administrator\Auto-GPT\autogpt\llm_utils.py", line 155, in create_embedding_with_ada
    return openai.Embedding.create(
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_resources\embedding.py", line 33, in create
    response = super().create(*args, **kwargs)
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 14525 tokens (14525 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
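The traceback shows the failure is in memory.add -> create_embedding_with_ada: the full read_file result is passed to openai.Embedding.create in a single request. Until a fix lands in a release, a workaround along these lines can help. This is a hedged sketch against the pre-1.0 `openai` client shown in the traceback, not a patch from the AutoGPT repo; it splits the text by tokens before embedding:

```python
# Hedged workaround sketch, not AutoGPT's own code: embed oversized text
# piece by piece so no single request exceeds the model's token limit.
# Uses the pre-1.0 `openai` client, matching the traceback above.
import openai
import tiktoken

MAX_TOKENS = 8191
ENCODING = tiktoken.encoding_for_model("text-embedding-ada-002")

def embed_in_chunks(text: str) -> list[list[float]]:
    """Embed `text` in token-limited pieces; returns one vector per piece."""
    tokens = ENCODING.encode(text)
    embeddings = []
    for start in range(0, len(tokens), MAX_TOKENS):
        # Decode a slice of at most MAX_TOKENS tokens back into text.
        piece = ENCODING.decode(tokens[start:start + MAX_TOKENS])
        response = openai.Embedding.create(
            input=[piece], model="text-embedding-ada-002"
        )
        embeddings.append(response["data"][0]["embedding"])
    return embeddings
```

Applied inside create_embedding_with_ada (or by truncating memory_to_add before memory.add is called), this keeps every request under the limit; averaging the per-piece vectors is one simple way to collapse the result back to a single embedding.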

qiangziakbar commented 1 year ago

Same problem here:

openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 9481 tokens (9481 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.