Welcome to the Auto-GPT-DockerSetup repository! This project aims to provide an easy-to-use starting point for users who want to run Auto-GPT using Docker. This setup separates runtime configuration from the actual Auto-GPT repository by providing a Docker Compose file tailored to running an instance from a pre-built Docker image. The official Docker Compose file in the Auto-GPT source also builds the image, but because it always rebuilds instead of mounting, and offers no solution for plugin requirements, it is not very pleasant to maintain.
Note: This template is designed to make it easier for users to get started with Auto-GPT and Docker. It is assumed that users have some experience with Docker and an understanding of what they're doing.
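As a rough sketch only (not the actual file in this repository), a compose file along the lines described here — assuming the image tag `my-own-autogpt` and the service name `auto-gpt` used later in this README, with the volume path being an assumption — might look like:

```yaml
version: "3"
services:
  auto-gpt:
    image: my-own-autogpt          # built in a later step of this README
    env_file:
      - .env                       # your Auto-GPT configuration
    volumes:
      - ./personas:/app/personas   # persona YAML files (assumed mount path)
```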
This repository contains:

- a `docker-compose.yml` file for running Auto-GPT in a Docker container
- a `personas` folder with an example YAML file; replace this with your own persona YAML files

To get started:

- clone this repository (or download it as a zip)
- remove the `.git` folder (ideally run `git init .` to start your own repository)
- copy your `.env` file from Auto-GPT; if you don't have one yet, use their template
- while you can set up a different memory backend, the easiest setup with long-term memory is embedded Weaviate; to use it, configure your `.env` file as follows:
  ```
  MEMORY_BACKEND=weaviate
  WEAVIATE_HOST="127.0.0.1"
  WEAVIATE_PORT=8080
  WEAVIATE_PROTOCOL="http"
  USE_WEAVIATE_EMBEDDED=True
  WEAVIATE_EMBEDDED_PATH="/app/aviate"
  ```
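To illustrate what the host/port/protocol settings above amount to, here is a small illustrative Python sketch (not Auto-GPT's actual code) that reads the same variables and assembles the endpoint URL a Weaviate client would connect to:

```python
import os

# The same values as in the .env example above, used as defaults here.
os.environ.setdefault("WEAVIATE_PROTOCOL", "http")
os.environ.setdefault("WEAVIATE_HOST", "127.0.0.1")
os.environ.setdefault("WEAVIATE_PORT", "8080")

def weaviate_url() -> str:
    """Assemble the Weaviate endpoint URL from the environment."""
    protocol = os.environ["WEAVIATE_PROTOCOL"]
    host = os.environ["WEAVIATE_HOST"]
    port = os.environ["WEAVIATE_PORT"]
    return f"{protocol}://{host}:{port}"

print(weaviate_url())  # http://127.0.0.1:8080
```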
- if you want to run any plugins, set them up now as well
Now we get to the Docker part:
- this setup expects an image named `autogpt-upstream`; you can build it by cloning the Auto-GPT repo and running `docker build -t autogpt-upstream .` in the root of that repo
- this repository's Dockerfile produces `my-own-autogpt`, so the next step is to build it: `docker build -t my-own-autogpt .`
- let the bot battle begin: `docker-compose run --rm auto-gpt -C personas/Entrepeneur-GPT.yml`
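The Dockerfile behind the `my-own-autogpt` build step is not shown here, but as a sketch under assumptions, it could extend the upstream image and install plugin dependencies — the plugin-requirements gap the intro mentions. Both the `FROM autogpt-upstream` relationship and the `plugins-requirements.txt` filename are assumptions for illustration:

```dockerfile
# Sketch only: extend the upstream image built in the previous step
FROM autogpt-upstream

# Hypothetical file listing your plugins' Python dependencies
COPY plugins-requirements.txt .
RUN pip install -r plugins-requirements.txt
```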