Closed — callect closed this issue 1 year ago
This should be fixed; please reset your .env file and pull the latest stable release.
I am already on the latest release, and my .env file is up to date. The problem is still there.
NEXT ACTION: COMMAND = read_file ARGUMENTS = {'file': 'expression.srt'}
Traceback (most recent call last):
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\cui\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\Administrator\Auto-GPT\autogpt\__main__.py", line 5, in <module>
Same problem here: openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 9481 tokens (9481 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
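The error itself means the request (the prompt plus the injected file content) exceeds the model's 8191-token context window, so text has to be token-counted and trimmed or chunked before it is sent. A minimal sketch of the counting side using tiktoken — illustrative only, not Auto-GPT's own code:

import tiktoken

MAX_CONTEXT_TOKENS = 8191  # the limit reported in the error above

def trim_to_context(text: str, model: str = "gpt-3.5-turbo") -> str:
    """Drop tokens from the end of text until it fits the context window."""
    enc = tiktoken.encoding_for_model(model)
    tokens = enc.encode(text)
    if len(tokens) <= MAX_CONTEXT_TOKENS:
        return text
    return enc.decode(tokens[:MAX_CONTEXT_TOKENS])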
Which Operating System are you using?
Windows
Which version of Auto-GPT are you using?
Latest Release
GPT-3 or GPT-4?
GPT-3.5
Steps to reproduce 🕹
First I run:
python data_ingestion.py --file expression.srt --max_length 2000
Using memory of type: LocalCache
Working with file expression.srt
File length: 36519 characters
Ingesting chunk 1 / 21 into memory
Ingesting chunk 2 / 21 into memory
Ingesting chunk 3 / 21 into memory
Ingesting chunk 4 / 21 into memory
Ingesting chunk 5 / 21 into memory
Ingesting chunk 6 / 21 into memory
Ingesting chunk 7 / 21 into memory
Ingesting chunk 8 / 21 into memory
Ingesting chunk 9 / 21 into memory
Ingesting chunk 10 / 21 into memory
Ingesting chunk 11 / 21 into memory
Ingesting chunk 12 / 21 into memory
Ingesting chunk 13 / 21 into memory
Ingesting chunk 14 / 21 into memory
Ingesting chunk 15 / 21 into memory
Ingesting chunk 16 / 21 into memory
Ingesting chunk 17 / 21 into memory
Ingesting chunk 18 / 21 into memory
Ingesting chunk 19 / 21 into memory
Ingesting chunk 20 / 21 into memory
Ingesting chunk 21 / 21 into memory
Done ingesting 21 chunks from expression.srt.
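The chunk count is consistent with overlapping splitting: assuming the script's default 200-character overlap, a 36519-character file split in 2000-character steps yields exactly 21 chunks. A simplified sketch of that kind of splitter (a stand-in for illustration, not the repo's exact split_file):

def split_text(content: str, max_length: int = 2000, overlap: int = 200):
    # Each chunk overlaps the next so sentences cut at a boundary
    # still appear whole in at least one chunk.
    start = 0
    while start < len(content):
        yield content[start : start + max_length + overlap]
        start += max_length - overlap

print(len(list(split_text("x" * 36519))))  # -> 21, matching the output above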
Next I run Auto-GPT, but it still insists it needs to read_file expression.srt, and when I let it start reading, I get the error: "This model's maximum context length is 8191 tokens, however you requested 14573 tokens".
Goals: ['get memory chunks with expression.srt', 'use openai_translation agent to Chinese every chunk then save to file name with chunk id.']
Continue (y/n): y
Using memory of type: LocalCache
Using Browser: chrome
THOUGHTS: I should start by getting the memory chunks with expression.srt
REASONING: I need to have the memory chunks before I can translate them to Chinese
PLAN:
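What appears to go wrong: instead of querying the ingested memory, the agent calls read_file, which puts the entire file (roughly 14573 tokens) into a single prompt and overruns the context window. The point of data_ingestion.py is to avoid exactly that: retrieve only the relevant chunks from memory and process them one at a time. A rough sketch of that pattern follows; the memory interface and wiring shown are assumptions for illustration, not Auto-GPT's exact API:

import openai

def translate_relevant(memory, query: str, num_relevant: int = 5) -> list[str]:
    translations = []
    # get_relevant-style retrieval (assumed interface) returns a handful of
    # ~2000-character chunks, each far below the 8191-token limit.
    for chunk in memory.get_relevant(query, num_relevant):
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Translate the following text to Chinese."},
                {"role": "user", "content": chunk},
            ],
        )
        translations.append(resp.choices[0].message.content)
    return translations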
Current behavior 😯
No response
Expected behavior 🤔
No response
Your prompt 📝
Your Logs 📒