Ezarr is a project built to make it EZ to deploy a Servarr mediacenter on an Ubuntu server. The badge above means that the shell script and docker-compose file in this repository at least don't crash. It doesn't necessarily mean they will run well on your system ;) It's set up to follow the TRaSH Guides, so it should perform optimally. It features:
Currently this script only works on Linux. There is a chance that the sample Docker Compose file will work on Windows, although this is untested. The only other requirements are Python 3 and Docker with docker-compose-v2. While this script may work with docker-compose-v1, it is built for, and highly recommended to be run with, v2. The easiest way to install these dependencies on Ubuntu and other Debian-based distros is by running:
```shell
sudo apt-get install python3 docker.io docker-compose-v2
```
For other Linux distros you may have to use a different package manager or download directly from docker's website.
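A quick sanity check that the required tools are on your `PATH` might look like this (a sketch; it only checks for the commands, not their versions):

```shell
# Report whether the required commands are available.
for cmd in python3 docker; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: installed"
  else
    echo "$cmd: missing"
  fi
done
```

If `docker` is present, running `docker compose version` additionally confirms that Compose v2 is available.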
To make things easier, a CLI has been developed. First, clone the repository into a directory of your choosing. You can then run it with `python3 main.py`, and the CLI will guide you through the process. This is the recommended method if you're setting this up for the first time on a new system. Please take a look at the important notes before you continue.
NOTE: This script will create users for each container, with IDs ranging from 13001 to 13014. If you want to choose your own IDs (or some of them are occupied), you'll have to go through the manual install.
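A quick way to check whether those IDs are free on your system (a sketch, Linux-only):

```shell
# List which of the UIDs Ezarr wants to use are already taken.
for uid in $(seq 13001 13014); do
  if getent passwd "$uid" >/dev/null; then
    echo "UID $uid is taken"
  else
    echo "UID $uid is free"
  fi
done
```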
If you're installing this for the first time, simply follow these steps. If you're coming from an older version or reinstalling with different IDs, run `remove_old_users.sh` to clean up the old users, then follow these steps.
1. Clone the repository:

   ```shell
   git clone https://github.com/Luctia/ezarr.git
   ```

2. Copy `.env.sample` to a real `.env` by running `cp .env.sample .env`.
3. Set `ROOT_DIR`, as this is where everything is going to be stored. The path in this value needs to be absolute. If you leave it empty, everything is installed in the directory the `.env` file is currently in.
4. Set `UID` to the ID of the user you want to run Docker with. You can find this by running `id -u` from that user's shell.
5. Run `setup.sh` as superuser. This will set up your users and directory structure and ensure permissions are set correctly.
6. Copy `docker-compose.yml.sample` to a real `docker-compose.yml` by running `cp docker-compose.yml.sample docker-compose.yml`.
7. Edit the `docker-compose.yml` file. If there are services you would like to leave out (for example, running Plex and Jellyfin at the same time is a bit unusual), you can comment them out by placing `#` in front of their lines. This ensures they are ignored by Docker Compose.
8. Double-check that your `.env` file is set up properly.
9. Run `docker compose up -d` to start the containers. If it complains about permissions, run the following commands to add your current user to the `docker` group and apply the change:

   ```shell
   sudo groupadd docker
   sudo usermod -aG docker $USER
   newgrp docker
   ```

   If it still doesn't work, reboot your system.
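For reference, a filled-in `.env` could look something like this (the path and ID below are examples, not defaults — use your own values):

```
# .env — example values only; adjust for your system
ROOT_DIR=/srv/ezarr   # absolute path where everything will be stored
UID=1000              # the output of `id -u` for your docker user
```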
That's it! Your containers are now up, and you can continue configuring them. Please take a look at the important notes before you continue.
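If you're unsure what commenting a service out of `docker-compose.yml` looks like, here is an abbreviated, illustrative fragment (the service names and image tags are examples, not necessarily what your file contains):

```yaml
services:
  # Commented out: this service will be ignored by Docker Compose.
  # plex:
  #   image: lscr.io/linuxserver/plex:latest
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
```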
- The script uses `sudo` to set up the permissions and folder structures, but you shouldn't run it as root.
- You can remove the created users by running `remove_old_users.sh`. This is also recommended if you are updating from an earlier version of this script, since there were previously some conflicts in user IDs.
- When connecting the services to one another, use `localhost` as the host.
- Set … to `true`.
- Media goes in `/data/media/` and then `tv`, `movies` or `music`, depending on the service.
- qBittorrent's default username is `admin`, with a one-time password that can be viewed by running `docker logs qbittorrent`.
- When asked, set the `radarr` category to `/data/torrents/movies`; you should do this. Also set the Default Save Path to `/data/torrents`. Set "Run external program on torrent completion" to true and enter `chmod -R 775 "%F/"` in the field.
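The `chmod -R 775 "%F/"` hook above makes each completed download group-writable so the other containers can import it. Its effect can be sketched on a scratch directory:

```shell
# Demonstrate what the completion hook does: recursively set
# rwx for owner and group, r-x for others.
mkdir -p /tmp/demo_torrent/sub
touch /tmp/demo_torrent/sub/episode.mkv
chmod -R 775 /tmp/demo_torrent
stat -c '%a %n' /tmp/demo_torrent/sub/episode.mkv   # → 775 /tmp/demo_torrent/sub/episode.mkv
```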
When using NFS, you'll have to run `setup.sh` on both your NFS server and your client.

On your server:

1. Copy `.env` and `setup.sh` to your NFS server.
2. Edit `.env` so that `ROOT_DIR` reflects where everything will be stored on your server, which is most likely different from the mapped location on the client. Make sure your `.env` file is not a `.sample`.
3. Run `setup.sh`.

On your client, make sure `.env` is set correctly, especially `ROOT_DIR`.
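For the root-squash note below: while `setup.sh` runs from the client, the server's export entry might temporarily look like this (the path and subnet are examples; remove `no_root_squash` again afterwards):

```
# /etc/exports on the NFS server — temporary, for running setup.sh from the client
/srv/ezarr  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```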
You don't have to do this on your server first, but it's recommended. If you are running the script on the client, make sure you temporarily enable `no_root_squash` on your NFS server, as the script needs superuser privileges to run, and by default NFS maps the root user to `nobody` to prevent abuse.

When you're trying to access SABnzbd for the first time, you'll come across the message `External internet access denied`
. To fix this, simply modify `sabnzbd.ini`, change `inet_exposure` to `4`, and restart the SABnzbd container (`docker restart sabnzbd`); you can now access the SABnzbd UI. Note: you may get an `Access denied - Hostname verification failed` error; to fix this, simply go to the IP of your server directly instead of the hostname. After accessing the UI, don't forget to set a username and password (https://sabnzbd.org/wiki/configuration/3.7/general, section Security).
For more instructions or help, see https://sabnzbd.org/wiki/extra/access-denied.html on the official SABnzbd website.
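The `inet_exposure` edit can be done with a quick `sed`. In the sketch below, the file is a stand-in created purely for demonstration — your real `sabnzbd.ini` lives wherever your compose file maps SABnzbd's `/config` directory:

```shell
# Stand-in config for demonstration; point this at your real sabnzbd.ini.
config=/tmp/sabnzbd.ini
printf 'inet_exposure = 0\n' > "$config"

# Allow external access to the web UI by setting inet_exposure to 4.
sed -i 's/^inet_exposure = .*/inet_exposure = 4/' "$config"
grep '^inet_exposure' "$config"   # → inet_exposure = 4
```

Afterwards, `docker restart sabnzbd` picks the change up.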
There is an `update_containers.sh` script that takes care of this. Simply run it and it updates all containers and removes the old images. If you want to keep the old images, simply comment out the last line of the script. It's essentially the following steps, but automated:
If you'd like to do it manually, go to the directory of your `docker-compose.yml` file and run `(sudo) docker compose pull`. This pulls the newest versions of all images (the blueprints for containers) listed in the `docker-compose.yml` file. Then, you can run `(sudo) docker compose up -d`. This deploys the new versions without losing uptime. Afterwards, you can run `(sudo) docker image prune` to remove the old images, freeing up space.
Some settings, particularly for the Servarr suite, are set in databases. While it might be possible to interact with these databases after creation, I'd rather not touch them. It's not that difficult to set them yourself, and quite difficult to do it automatically. For other containers, configuration files are automatically generated, so these are more easily edited, but I currently don't believe this is worth the effort.
On top of the above, connecting the containers would mean setting a password and creating an API key for all of them. This would lead to everyone using Ezarr having the same API key and user/password combination. Personally, I'd rather trust users to figure this out on their own than trust them to change these passwords and keys.