Closed: imfleabagg closed this issue 4 months ago.
same here
`Using memory of type: JSONFileMemory
Using Browser: chrome
Error parsing JSON response with literal_eval invalid syntax (
JSON Validation Error: 'thoughts' is a required property
Failed validating 'required' in schema: {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'command': {'additionalProperties': False, 'properties': {'args': {'type': 'object'}, 'name': {'type': 'string'}}, 'required': ['name', 'args'], 'type': 'object'}, 'thoughts': {'additionalProperties': False, 'properties': {'criticism': {'description': 'constructive ' 'self-criticism', 'type': 'string'}, 'plan': {'description': '- ' 'short ' 'bulleted\n' '- ' 'list ' 'that ' 'conveys\n' '- ' 'long-term ' 'plan', 'type': 'string'}, 'reasoning': {'type': 'string'}, 'speak': {'description': 'thoughts ' 'summary ' 'to ' 'say ' 'to ' 'user', 'type': 'string'}, 'text': {'description': 'thoughts', 'type': 'string'}}, 'required': ['text', 'reasoning', 'plan', 'criticism', 'speak'], 'type': 'object'}}, 'required': ['thoughts', 'command'], 'type': 'object'}
On instance: {} JSON Validation Error: 'command' is a required property
Failed validating 'required' in schema: {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'command': {'additionalProperties': False, 'properties': {'args': {'type': 'object'}, 'name': {'type': 'string'}}, 'required': ['name', 'args'], 'type': 'object'}, 'thoughts': {'additionalProperties': False, 'properties': {'criticism': {'description': 'constructive ' 'self-criticism', 'type': 'string'}, 'plan': {'description': '- ' 'short ' 'bulleted\n' '- ' 'list ' 'that ' 'conveys\n' '- ' 'long-term ' 'plan', 'type': 'string'}, 'reasoning': {'type': 'string'}, 'speak': {'description': 'thoughts ' 'summary ' 'to ' 'say ' 'to ' 'user', 'type': 'string'}, 'text': {'description': 'thoughts', 'type': 'string'}}, 'required': ['text', 'reasoning', 'plan', 'criticism', 'speak'], 'type': 'object'}}, 'required': ['thoughts', 'command'], 'type': 'object'}
On instance: {}
NEXT ACTION: COMMAND = None ARGUMENTS = None Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands, 'n' to exit program, or enter feedback for ServerlessGPT... Asking user via keyboard... Input:`
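For reference, both "required property" failures happen because the model's reply never parsed, so the validator ends up checking an empty object `{}`. A minimal sketch of that failure mode (pure Python; `parse_agent_reply` is illustrative, not Auto-GPT's actual function):

```python
import ast
import json

REQUIRED_KEYS = ("thoughts", "command")  # top-level keys the response schema requires

def parse_agent_reply(raw: str):
    """Parse a model reply and report which required top-level keys are missing."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        try:
            # fall back to Python literal parsing for almost-JSON replies
            data = ast.literal_eval(raw)
        except (ValueError, SyntaxError):
            data = {}  # unparseable reply -> validated as an empty instance
    if not isinstance(data, dict):
        data = {}
    missing = [k for k in REQUIRED_KEYS if k not in data]
    return data, missing

# a conversational (non-JSON) reply ends up as {} and fails both 'required' checks
data, missing = parse_agent_reply("Sure! Here is my plan...")
```

That matches the log above: the instance is `{}`, so both 'thoughts' and 'command' are reported as missing required properties.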
fixed on my side with creating a fresh ai_settings.yaml
Hmm, I got it again a bit later in another request (master branch).
Not sure what is causing this. I am using the stable branch; would you mind sharing how you "fixed" it previously? Again, sorry, I'm fairly new to this.
I keep getting that error no matter what I do as well!
Update: It seems the workaround for this issue is to use Pinecone or Redis as the memory backend (MEMORY_BACKEND=).
If you're using AutoGPT via Git or a local install, you will need to create accounts with the services mentioned above, or use the Docker setup, which includes Redis.
Personally, I just made an account with Pinecone, as the Docker setup was a headache for a novice to these things. Hope it helps.
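For anyone trying this workaround: switching backends is a .env change. The Redis variable names below match the .env.template excerpts quoted later in this thread; the values are placeholders, and the exact Pinecone variable names should be checked against your own .env.template:

```
## Use Redis instead of the default json_file backend
MEMORY_BACKEND=redis
REDIS_HOST=localhost
REDIS_PORT=6379
## or for Pinecone: MEMORY_BACKEND=pinecone, plus your Pinecone API credentials
```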
Much of this will be fixed when we move to OpenAI functions. In the meantime we can try to figure out a solution.
I am also getting this error. I'm also not a developer and not familiar with what Redis or Pinecone are.
fixed on my side with creating a fresh ai_settings.yaml
How do you create a fresh ai_settings.yaml, and what exactly does that do?
FWIW, I too am experiencing this. Worked well for 20ish iterations before bombing out.
This is my first time running it, so I'm not sure if it's a recent commit issue, or a bug that has existed for a while.
same error
NEXT ACTION: COMMAND = read_file ARGUMENTS = {'filename': '/home/Auto-GPT/autogpt/auto_gpt_workspace/pdf2word.py'}
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands, 'n' to exit program, or enter feedback for PDF2WordGPT...
Asking user via keyboard...
Input:y
-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
Text length: 219 tokens
Max chunk length: 3545 tokens
Failed to update API costs: KeyError: 'gpt-35-turbo'
Failed to update API costs: KeyError: 'ada'
Failed to update API costs: KeyError: 'ada'
SYSTEM: Command read_file returned:

```python
import PyPDF2
import docx

def convert_pdf_to_word(pdf_file_path, word_file_path):
    # Open the PDF file
    pdf_file = open(pdf_file_path, 'rb')
    pdf_reader = PyPDF2.PdfReader(pdf_file)
    # Create a new Word document
    doc = docx.Document()
    # Loop through each page of the PDF file
    for page_num in range(pdf_reader.numPages):
        # Extract the text from the page
        page = pdf_reader.getPage(page_num)
        text = page.extractText()
        # Add the text to the Word document
        doc.add_paragraph(text)
    # Save the Word document
    doc.save(word_file_path)

if __name__ == '__main__':
    # Get the PDF file path and Word file path from the user
    pdf_file_path = input('Enter the path of the PDF file: ')
    word_file_path = input('Enter the path of the Word file: ')
    # Convert the PDF file to a Word document
    convert_pdf_to_word(pdf_file_path, word_file_path)
```
Error parsing JSON response with literal_eval unterminated string literal (detected at line 1) (<unknown>, line 1)
Error parsing JSON response with literal_eval '{' was never closed (<unknown>, line 1)
Error parsing JSON response with literal_eval '{' was never closed (<unknown>, line 1)
Error parsing JSON response with literal_eval '{' was never closed (<unknown>, line 1)
Error parsing JSON response with literal_eval '{' was never closed (<unknown>, line 1)
Failed to update API costs: KeyError: 'gpt-35-turbo'
Failed to update API costs: KeyError: 'gpt-35-turbo'
Error parsing JSON response with literal_eval '{' was never closed (<unknown>, line 1)
For what it's worth, I reverted from the latest version (v0.4.1) back to v0.4.0 (Docker container in my case) and that seems to still work fine, unless this is some kind of external sporadic issue.
@Dids same reverting to v0.4.0 worked for me
@Dids @kelteseth how did you guys revert the version and run it? Sorry, very new to all this tooling.
I got this fixed on Ubuntu by running `sudo chmod -R 777 autogpt/` while in the Auto-GPT folder.
I actually changed docker-compose.yml from `image: significantgravitas/auto-gpt` to `image: significantgravitas/auto-gpt:v0.4.0`, but then I got a new error: FAILED FILE VALIDATION The file prompt_settings.yaml wasn't found.
I copied the file from the repo; still doesn't work ¯\_(ツ)_/¯
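If anyone else hits the prompt_settings.yaml error after pinning the tag, a possible workaround (untested sketch; the container path is my assumption, mirroring the ai_settings.yaml bind-mount example elsewhere in this thread) is to bind-mount the copied file:

```yaml
services:
  auto-gpt:
    image: significantgravitas/auto-gpt:v0.4.0   # pinned tag instead of latest
    volumes:
      - type: bind
        source: ./prompt_settings.yaml   # copy this file from the repo first
        target: /app/prompt_settings.yaml
```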
Same here. macOS Big Sur / Docker / GPT-3.5 / Auto-GPT 4.1
2023-06-23 20:38:34,651 ERROR logs:_log:143 Error parsing JSON response with literal_eval invalid syntax (
Failed validating 'required' in schema: {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'command': {'additionalProperties': False, 'properties': {'args': {'type': 'object'}, 'name': {'type': 'string'}}, 'required': ['name', 'args'], 'type': 'object'}, 'thoughts': {'additionalProperties': False, 'properties': {'criticism': {'description': 'constructive ' 'self-criticism', 'type': 'string'}, 'plan': {'description': '- ' 'short ' 'bulleted\n' '- ' 'list ' 'that ' 'conveys\n' '- ' 'long-term ' 'plan', 'type': 'string'}, 'reasoning': {'type': 'string'}, 'speak': {'description': 'thoughts ' 'summary ' 'to ' 'say ' 'to ' 'user', 'type': 'string'}, 'text': {'description': 'thoughts', 'type': 'string'}}, 'required': ['text', 'reasoning', 'plan', 'criticism', 'speak'], 'type': 'object'}}, 'required': ['thoughts', 'command'], 'type': 'object'}
On instance: {}
looks like reverting should work
Same issue as @lucasmocellin
I was able to revert to 0.4.0, and then apply this hotfix:
Not sure if this is the same, but I am getting a lot of these as well: "Error parsing JSON response with literal_eval '{' was never closed ("
Have the same issue. If anyone in the future finds a way to solve it, please kick my ass.
I'm also lining up here for ass-kicking like @QvQQ
Same issue, after it was working for a day or two.
I hope this helps someone: version significantgravitas/auto-gpt:v0.3.1 is the one that works for me. Simply make an empty directory, bot/ or something, then:

- Create docker-compose.yaml:

```yaml
version: "3.9"
services:
  auto-gpt:
    image: significantgravitas/auto-gpt:v0.3.1
    depends_on:
      redis:
        condition: service_started
    env_file:
      - .env
    environment:
      MEMORY_BACKEND: ${MEMORY_BACKEND:-redis}
      REDIS_HOST: ${REDIS_HOST:-redis}
    profiles: ["exclude-from-up"]
    volumes:
      - ./auto_gpt_workspace:/app/autogpt/auto_gpt_workspace
      - ./data:/app/data
      ## allow auto-gpt to write logs to disk
      - ./logs:/app/logs
      ## uncomment following lines if you want to make use of these files
      ## you must have them existing in the same folder as this docker-compose.yml
      #- type: bind
      #  source: ./azure.yaml
      #  target: /app/azure.yaml
      #- type: bind
      #  source: ./ai_settings.yaml
      #  target: /app/ai_settings.yaml
  redis:
    image: "redis/redis-stack-server:latest"
```

- Create .env. I use LocalAI to host my local LLMs, so set the proper URL; for me I had to use SSL, so I created a proxy pass with certbot (it was refusing the connection otherwise):

```
OPENAI_API_KEY=sk---anystringhere
OPENAI_API_BASE=https://your_domain:443/v1
```

- Run the following in that directory:

```shell
sudo docker-compose run --rm auto-gpt --gpt3only
```

Hope it helps! 🥇
The problem is that GPT sometimes responds in markdown format and starts the response with a json code fence. To force responses in bare JSON, you can add the beginning of the response to the request:

`prompt.append(Message("assistant", """thoughts: {text:"""))`

The beginning of the response then needs to be merged with the GPT response.
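An alternative to priming the assistant reply is to strip the fence before parsing. A minimal sketch, not Auto-GPT's actual code (the function name and regex are mine):

```python
import json
import re

FENCE = "`" * 3  # the literal markdown fence marker, written indirectly so this snippet nests safely

def strip_markdown_fence(raw: str) -> str:
    """Remove a markdown json code-fence wrapper that the model sometimes adds."""
    pattern = re.escape(FENCE) + r"(?:json)?\s*(.*?)\s*" + re.escape(FENCE)
    match = re.search(pattern, raw, re.DOTALL)
    return match.group(1) if match else raw.strip()

# a fenced reply now parses cleanly instead of tripping literal_eval
reply = FENCE + 'json\n{"thoughts": {"text": "ok"}, "command": {"name": "noop", "args": {}}}\n' + FENCE
data = json.loads(strip_markdown_fence(reply))
```

Unfenced replies pass through unchanged, so this is safe to apply to every response before the JSON parse.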
The problem is the OPENAI_FUNCTIONS setting. Set OPENAI_FUNCTIONS=False and SMART_LLM=gpt-4-0314.
Sorry this is a bit long, but I am trying to give a comprehensive explanation and what I tried. I have the exact same error as mentioned at the top of this thread.
I am getting an intermittent error as below. Further down, I’ve explained the different things I’ve tried but nothing seems to work. Any thoughts or ideas greatly appreciated.
I am trying to follow the tutorial at
https://lablab.ai/t/autogpt-tutorial-how-to-use-and-create-agent-for-coding-game
(though I did not follow their setup instructions, because initially, coming from my previous setup, which had been working for a week or so with various experiments, albeit all in manual mode, I was not having errors). Since I started getting issues, I've tried all sorts of setup routes, but everything breaks with the above parsing error, usually after one or two attempts at the prompt mentioned below.
Here is the prompt, initial results and the error:
So at the prompt I enter: AI Agent for coding
And then I get the following output:
NOTE:All files/directories created by this agent can be found inside its workspace at: /app/auto_gpt_workspace
CodeGeniusGPT has been created with the following details:
Name: CodeGeniusGPT
Role: an advanced AI coding assistant that helps developers and programmers in writing efficient and error-free code by providing expert guidance, suggestions, and solutions for various programming languages and frameworks.
Goals:
- Assist in problem-solving, debugging, and troubleshooting to help you overcome coding challenges and improve the quality of your code.
- Offer personalized code recommendations and best practices to optimize performance, enhance readability, and ensure adherence to coding standards.
- Provide comprehensive documentation and explanations for complex concepts, algorithms, and design patterns to enhance your understanding and knowledge of coding principles.
- Stay up-to-date with the latest programming trends, libraries, and frameworks to offer relevant and cutting-edge solutions for your coding needs.
- Foster a collaborative coding environment by facilitating code reviews, pair programming, and knowledge sharing to promote continuous learning and growth.
Could not load MemoryItems from file: input length is 0: line 1 column 1 (char 0)
Using memory of type: JSONFileMemory
Using Browser: chrome
Error parsing JSON response with literal_eval invalid syntax (<unknown>, line 2)
Response could not be parsed: Validation of response failed:
'thoughts' is a required property
Failed validating 'required' in schema:
{'$schema': 'http://json-schema.org/draft-07/schema#',
'additionalProperties': False,
'properties': {'command': {'additionalProperties': False,
'properties': {'args': {'type': 'object'},
'name': {'type': 'string'}},
'required': ['name', 'args'],
'type': 'object'},
'thoughts': {'additionalProperties': False,
'properties': {'criticism': {'description': 'constructive '
'self-criticism',
'type': 'string'},
'plan': {'description': '- '
'short '
'bulleted\n'
'- '
'list '
'that '
'conveys\n'
'- '
'long-term '
'plan',
'type': 'string'},
'reasoning': {'type': 'string'},
'speak': {'description': 'thoughts '
'summary '
'to '
'say '
'to '
'user',
'type': 'string'},
'text': {'description': 'thoughts',
'type': 'string'}},
'required': ['text',
'reasoning',
'plan',
'criticism',
'speak'],
'type': 'object'}},
'required': ['thoughts', 'command'],
'type': 'object'}
On instance:
{};
'command' is a required property
Failed validating 'required' in schema:
{'$schema': 'http://json-schema.org/draft-07/schema#',
'additionalProperties': False,
'properties': {'command': {'additionalProperties': False,
'properties': {'args': {'type': 'object'},
'name': {'type': 'string'}},
'required': ['name', 'args'],
'type': 'object'},
'thoughts': {'additionalProperties': False,
'properties': {'criticism': {'description': 'constructive '
'self-criticism',
'type': 'string'},
'plan': {'description': '- '
'short '
'bulleted\n'
'- '
'list '
'that '
'conveys\n'
'- '
'long-term '
'plan',
'type': 'string'},
'reasoning': {'type': 'string'},
'speak': {'description': 'thoughts '
'summary '
'to '
'say '
'to '
'user',
'type': 'string'},
'text': {'description': 'thoughts',
'type': 'string'}},
'required': ['text',
'reasoning',
'plan',
'criticism',
'speak'],
'type': 'object'}},
'required': ['thoughts', 'command'],
'type': 'object'}
On instance:
{}
CODEGENIUSGPT THOUGHTS:
REASONING: None
CRITICISM: None
NO ACTION SELECTED: The Agent failed to select an action.
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for CodeGeniusGPT...
Asking user via keyboard...
Input:
Since encountering this, I’ve wiped out my install and tried a few things, as below.
I have a paid OpenAI API plan and key. To check it's being detected in the .env file: if I take the key out, it asks me to enter it at the command line. I then used the instructions at https://docs.agpt.co/setup/. There are several routes through, and I do find them unclear. So here is what I tried (note I also tried the instructions you can download on autogpt.net, with the same results):
First attempt:
Set up with Docker:
mkdir AutoGPT
cd AutoGPT
Create and save the following into docker-compose.yml (I have Docker installed):

```yaml
version: "3.9"
services:
  auto-gpt:
    image: significantgravitas/auto-gpt
    env_file:
      - .env
    profiles: ["exclude-from-up"]
    volumes:
      - ./auto_gpt_workspace:/app/auto_gpt_workspace
      - ./data:/app/data
      ## allow auto-gpt to write logs to disk
      - ./logs:/app/logs
      ## uncomment following lines if you want to make use of these files
      ## you must have them existing in the same folder as this docker-compose.yml
      #- type: bind
      #  source: ./azure.yaml
      #  target: /app/azure.yaml
      #- type: bind
      #  source: ./ai_settings.yaml
      #  target: /app/ai_settings.yaml
```
Save a .env file into the same location with the following changes from the .env.template:

```
OPENAI_API_KEY=<my key>
OPENAI_ORGANIZATION=<my org>
SMART_LLM=gpt-4
FAST_LLM=gpt-3.5-turbo
```

Note, I've tried this with the JSON file default and got similar issues.

```
REDIS_HOST=localhost
## REDIS_PORT - Redis port (Default: 6379)
REDIS_PORT=6379
## REDIS_PASSWORD - Redis password (Default: "")
REDIS_PASSWORD=<my password>
## MEMORY_BACKEND - Memory backend type
# MEMORY_BACKEND=json_file
## MEMORY_INDEX - Value used in the Memory backend for scoping, naming, or indexing (Default: auto-gpt)
# MEMORY_INDEX=auto-gpt
## GOOGLE_API_KEY - Google API key (Default: None)
GOOGLE_API_KEY=<my key>
## GOOGLE_CUSTOM_SEARCH_ENGINE_ID - Google custom search engine ID (Default: None)
GOOGLE_CUSTOM_SEARCH_ENGINE_ID=<my id>
ELEVENLABS_API_KEY=<my key>
```
then
docker pull significantgravitas/auto-gpt
Then onto the run with docker section
docker compose version
Docker Compose version v2.22.0-desktop.2
So I think that’s ok.
docker compose build auto-gpt
(actually, thinking about it, that is maybe redundant...)
Just gives
[+] Building 0.0s (0/0)
Then do
docker compose run --rm auto-gpt
And that will seem to startup:
[+] Building 0.0s (0/0) docker:desktop-linux
[+] Building 0.0s (0/0) docker:desktop-linux
plugins_config.yaml does not exist, creating base config.
NEWS: Welcome to Auto-GPT!
NEWS:
NEWS:
Welcome to Auto-GPT! run with '--help' for more information.
Create an AI-Assistant: input '--manual' to enter manual mode.
Asking user via keyboard...
I want Auto-GPT to:
So at the prompt I enter: AI Agent for coding
Then I get a long error above.
I then try changing the .env by commenting out all the REDIS related lines. I gave it the same prompt. Result is the same error. So that route seems to be a failure.
Running it multiple times, I found the error is intermittent.
So then I wiped all that and started through the git route from the agpt docs:
git clone https://github.com/Significant-Gravitas/AutoGPT.git
Gives the error
warning: the following paths have collided (e.g. case-sensitive paths
on a case-insensitive filesystem) and only one from the same
colliding group is in the working tree:
'arena/TestAgent.json'
'arena/testAgent.json'
Tried a few things to get to the stable branch, but probably did it wrong, so I wanted to start fresh with what I thought was more likely to work:
git clone -b stable --single-branch https://github.com/Significant-Gravitas/AutoGPT.git
Gives:
Cloning into 'AutoGPT'...
remote: Enumerating objects: 12692, done.
remote: Total 12692 (delta 0), reused 0 (delta 0), pack-reused 12692
Receiving objects: 100% (12692/12692), 4.98 MiB | 5.93 MiB/s, done.
Resolving deltas: 100% (8551/8551), done.
Seemed to work. Put the .env back in.
docker compose build auto-gpt
Seemed to work. Then:
docker compose run --rm auto-gpt
Gives:
[+] Building 0.0s (0/0) docker:desktop-linux
[+] Building 0.0s (0/0) docker:desktop-linux
plugins_config.yaml does not exist, creating base config.
NEWS: Welcome to Auto-GPT!
NEWS:
NEWS:
Welcome to Auto-GPT! run with '--help' for more information.
Create an AI-Assistant: input '--manual' to enter manual mode.
Asking user via keyboard...
I want Auto-GPT to: AI Agent for coding
NOTE:All files/directories created by this agent can be found inside its workspace at: /app/auto_gpt_workspace
CodeGeniusGPT has been created with the following details:
Name: CodeGeniusGPT
Role: an AI coding assistant that helps developers and programmers in writing efficient and error-free code by providing expert guidance, suggestions, and solutions for various programming languages and frameworks.
Goals:
- Assist in problem-solving, debugging, and troubleshooting to help you overcome coding challenges and improve the quality of your code.
- Offer personalized code recommendations and best practices to optimize performance, enhance readability, and ensure adherence to coding standards.
- Provide comprehensive documentation and explanations for complex concepts, algorithms, and design patterns to enhance your understanding and knowledge of coding principles.
- Stay up-to-date with the latest programming languages, frameworks, and tools to offer relevant and cutting-edge solutions for your coding needs.
- Foster a collaborative coding environment by facilitating code reviews, pair programming, and knowledge sharing to promote continuous learning and growth.
Could not load MemoryItems from file: input length is 0: line 1 column 1 (char 0)
Using memory of type: JSONFileMemory
Using Browser: chrome
| Thinking...
Then I intermittently get the same error.
Tried doing ./run.sh
but same results, i.e. intermittent error.
As per someone’s suggestion above, I tried:
SMART_LLM=gpt-4-0314
But I just get an error saying I don’t have access to it.
Per another suggestion, I also tried:
OPENAI_FUNCTIONS=False
But just got the same parsing error.
Any ideas greatly appreciated.
Tried many of the same steps and suggestions, ended up here as well. Given the number of people using AutoGPT successfully versus the number encountering the same error, there is clearly a missing link somewhere. I can't even get --help to load successfully all the way.
I hope it helps someone: version significantgravitas/auto-gpt:v0.3.1 is the one that works for me. Simply make an empty directory bot/ or something.
- create docker-compose.yaml

```yaml
version: "3.9"
services:
  auto-gpt:
    image: significantgravitas/auto-gpt:v0.3.1
    depends_on:
      redis:
        condition: service_started
    env_file:
      - .env
    environment:
      MEMORY_BACKEND: ${MEMORY_BACKEND:-redis}
      REDIS_HOST: ${REDIS_HOST:-redis}
    profiles: ["exclude-from-up"]
    volumes:
      - ./auto_gpt_workspace:/app/autogpt/auto_gpt_workspace
      - ./data:/app/data
      ## allow auto-gpt to write logs to disk
      - ./logs:/app/logs
      ## uncomment following lines if you want to make use of these files
      ## you must have them existing in the same folder as this docker-compose.yml
      #- type: bind
      #  source: ./azure.yaml
      #  target: /app/azure.yaml
      #- type: bind
      #  source: ./ai_settings.yaml
      #  target: /app/ai_settings.yaml
  redis:
    image: "redis/redis-stack-server:latest"
```
- create .env — I use LocalAI to host my local LLMs, so set the proper URL. For me I had to use SSL; I created a proxy pass with certbot for that (it was refusing the connection otherwise).

```
OPENAI_API_KEY=sk---anystringhere
OPENAI_API_BASE=https://your_domain:443/v1
```
- run

```shell
sudo docker-compose run --rm auto-gpt --gpt3only
```

in that directory. Hope it helps! 🥇
When following exactly what you said, I am able to run on LM_STUDIO, but I still have a small problem:
```
Using memory of type: RedisMemory
Using Browser: chrome
Apparently json was fixed.
The JSON object is invalid.
THOUGHTS: None
REASONING: None
CRITICISM: None
NEXT ACTION: COMMAND = Error: ARGUMENTS = Missing 'command' object in JSON
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands, 'n' to exit program, or enter feedback for ...
Asking user via keyboard...
Input:-y
SYSTEM: Human feedback: -y
Apparently json was fixed.
The JSON object is invalid.
THOUGHTS: None
REASONING: None
CRITICISM: None
NEXT ACTION: COMMAND = Error: ARGUMENTS = Missing 'command' object in JSON
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands, 'n' to exit program, or enter feedback for ...
Asking user via keyboard...
```
> I hope it helps someone: version significantgravitas/auto-gpt:v0.3.1 is the one that works for me. simply make a empty directory bot/ or something […]
Yo man, I am trying LocalAI too now. Could you tell me if you have GPU acceleration? Which embedding model did you choose? Which LLM did you work with?
I cannot get this to work; same thing.
Here is an example:

```
All packages are installed.
WARNING: You do not have access to gpt-3.5-turbo. Setting fast_llm to gpt-3.5-turbo.
WARNING: You do not have access to gpt-4-0314. Setting smart_llm to gpt-3.5-turbo.
NEWS: Welcome to Auto-GPT!
NEWS:
NEWS: Welcome back! Would you like me to return to being BMIGPT?
Asking user via keyboard...
Continue with the last settings?
Name: BMIGPT
Role: A knowledgeable and empathetic assistant who helps users track their body mass index (BMI) and provides personal
Goals: ["Accurately calculate the user's BMI based on their height, weight, and age.", 'Provide clear and concise expla
API Budget: infinite
Continue (y/n): y
NOTE: All files/directories created by this agent can be found inside its workspace at: C:\Users\James Knox\Downloads\AutoGPT-stable\auto_gpt_workspace
BMIGPT has been created with the following details:
Name: BMIGPT
Role: A knowledgeable and empathetic assistant who helps users track their body mass index (BMI) and provides personalized recommendations for a healthy lifestyle.
Goals:
Failed validating 'required' in schema: {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'command': {'additionalProperties': False, 'properties': {'args': {'type': 'object'}, 'name': {'type': 'string'}}, 'required': ['name', 'args'], 'type': 'object'}, 'thoughts': {'additionalProperties': False, 'properties': {'criticism': {'description': 'constructive self-criticism', 'type': 'string'}, 'plan': {'description': '- short bulleted\n- list that conveys\n- long-term plan', 'type': 'string'}, 'reasoning': {'type': 'string'}, 'speak': {'description': 'thoughts summary to say to user', 'type': 'string'}, 'text': {'description': 'thoughts', 'type': 'string'}}, 'required': ['text', 'reasoning', 'plan', 'criticism', 'speak'], 'type': 'object'}}, 'required': ['thoughts', 'command'], 'type': 'object'}
On instance: {}; 'command' is a required property
Failed validating 'required' in schema: {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'command': {'additionalProperties': False, 'properties': {'args': {'type': 'object'}, 'name': {'type': 'string'}}, 'required': ['name', 'args'], 'type': 'object'}, 'thoughts': {'additionalProperties': False, 'properties': {'criticism': {'description': 'constructive self-criticism', 'type': 'string'}, 'plan': {'description': '- short bulleted\n- list that conveys\n- long-term plan', 'type': 'string'}, 'reasoning': {'type': 'string'}, 'speak': {'description': 'thoughts summary to say to user', 'type': 'string'}, 'text': {'description': 'thoughts', 'type': 'string'}}, 'required': ['text', 'reasoning', 'plan', 'criticism', 'speak'], 'type': 'object'}}, 'required': ['thoughts', 'command'], 'type': 'object'}
On instance: {}
BMIGPT THOUGHTS:
REASONING: None
CRITICISM: None
NO ACTION SELECTED: The Agent failed to select an action.
Enter 'y' to authorise command, 'y -N' to run N continuous commands, 'n' to exit program, or enter feedback for BMIGPT...
Asking user via keyboard...
Input:
```
I figured this out. For the error:

```
Failed validating 'required' in schema: {'$schema': 'http://json-schema.org/draft-07/schema#', 'additionalProperties': False, 'properties': {'command': {'additionalProperties': False, 'properties': {'args': {'type': 'object'}, 'name': {'type': 'string'}}, 'required': ['name', 'args'], 'type': 'object'}, 'thoughts': {'additionalProperties': False, 'properties': {'criticism': {'description': 'constructive self-criticism', 'type': 'string'}, 'plan': {'description': '- short bulleted\n- list that conveys\n- long-term plan', 'type': 'string'}, 'reasoning': {'type': 'string'}, 'speak': {'description': 'thoughts summary to say to user', 'type': 'string'}, 'text': {'description': 'thoughts', 'type': 'string'}}, 'required': ['text', 'reasoning', 'plan', 'criticism', 'speak'], 'type': 'object'}}, 'required': ['thoughts', 'command'], 'type': 'object'}
On instance: {}
```
If you are using a local server (or whatever your server setup for your model is), make sure that the prompt formatting is not inserting new lines.
It worked for me, give it a try!
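To illustrate why stray newlines in the prompt/response formatting matter: JSON allows whitespace between tokens, but a raw (unescaped) newline inside a string value is an invalid control character, so Python's parser rejects the whole response. A small sketch (the payloads are made up for illustration):

```python
import json

good = '{"thoughts": {"text": "ok"}, "command": {"name": "noop", "args": {}}}'
bad = '{"thoughts": {"text": "line one\nline two"}}'  # raw newline inside a string value

parsed = json.loads(good)  # whitespace between tokens is legal JSON

try:
    json.loads(bad)
    broken = False
except json.JSONDecodeError:
    broken = True  # unescaped control characters inside strings are rejected
```

One rejected response is enough to leave the agent with no `thoughts`/`command` object, which matches the validation errors people are seeing above.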
The problem is the OPENAI_FUNCTIONS setting. Set `OPENAI_FUNCTIONS=False` and `SMART_LLM=gpt-4-0314`.
Seems to help.
> The problem is the OPENAI_FUNCTIONS setting. Turn OPENAI_FUNCTIONS=False and the SMART_LLM=gpt-4-0314
Maybe this is working because it is related to:
https://platform.openai.com/docs/guides/function-calling
> We are aware of an issue with non-ASCII outputs in gpt-3.5-turbo-1106 and gpt-4-1106-preview, and are working on implementing a fix. When these models generate a function call and the arguments include non-ASCII characters, the API may return Unicode escape sequences instead of the Unicode character directly. For example, arguments may look like {"location": "D\u00fcsseldorf"} instead of {"location": "Düsseldorf"}. Most applications should not be affected by this, as JSON parsers in languages like Python and Javascript will parse these strings into the correct objects. To stay updated on this topic, please subscribe to [this community forum thread](https://community.openai.com/t/gpt-4-1106-preview-messes-up-function-call-parameters-encoding/478500).
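As that note says, the escape sequences themselves should be harmless here, since Python's standard `json` parser decodes them transparently. A quick check using the exact example from the OpenAI docs:

```python
import json

# The API may return \u escape sequences instead of raw non-ASCII characters;
# json.loads decodes them back to the intended Unicode string.
args = json.loads('{"location": "D\\u00fcsseldorf"}')
# args["location"] == "Düsseldorf"
```

So if you are seeing the `'thoughts' is a required property` error, the cause is most likely an empty or malformed response, not the Unicode escaping.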
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.
⚠️ Search for existing issues first ⚠️
Which Operating System are you using?
Other
Which version of Auto-GPT are you using?
Latest Release
Do you use OpenAI GPT-3 or GPT-4?
GPT-3.5
Which area covers your issue best?
Installation and setup
Describe your issue.
Hi all!
I have been playing around with AutoGPT for a few days now, and it had been running quite well until yesterday afternoon. I am very much unsure what changed, but this is the error I keep getting now:
Not sure why, as it started fairly randomly. I am no developer whatsoever, so if anyone could help, I would very much appreciate it! Thanks
Upload Activity Log Content
No response
Upload Error Log Content
No response