¶ The Best Docker Setup

TL;DR: An eponymous user per daemon and a shared group with a umask of 002. Consistent path definitions across all containers that maintain the folder structure. Using one volume (so the download folder and library folder are on the same file system) makes hardlinks and instant (atomic) moves possible for Sonarr, Radarr, Lidarr and Readarr. And most of all, ignore most of the Docker image’s path documentation!
Note: Many folks find TRaSH's Hardlink Tutorial helpful and easier to understand than this guide. This guide is more conceptual in nature while TRaSH's tutorial walks you through the process.
¶ Portainer

See this Docker Guide and TRaSH's Docker Tutorial instead for how to set up Docker Compose.

¶ Introduction

This article will not show you specifics about the best Docker setup, but it describes an overview that you can use to make your own setup the best that it can be. The idea is that you run each Docker container as its own user, with a shared group and consistent volumes so every container sees the same path layout. This is easy to say, but not so easy to understand and explain.
¶ Multiple users and a shared group

¶ Permissions

Ideally, each piece of software runs as its own user, and they are all part of a shared group with folder permissions set to 775 (drwxrwxr-x) and files set to 664 (-rw-rw-r--), which is a umask of 002. A sane alternative is a single shared user, which would use 755 and 644, a umask of 022. You can restrict permissions even more by denying read from “other”, which would be a umask of 007 for a user per daemon or 077 for a single shared user. For a deeper explanation, try the Arch Linux wiki articles about file permissions and attributes and UMASK.
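The mapping from a umask to the resulting modes is simple arithmetic: the umask's bits are removed from a base of 777 for folders and 666 for files. A purely illustrative Python sketch of that calculation:

```python
def modes(umask):
    """Return (folder, file) permission strings for a given umask."""
    folder = 0o777 & ~umask  # folders start from 777 (rwx for everyone)
    file = 0o666 & ~umask    # files start from 666 (no execute bits)
    return format(folder, "o"), format(file, "o")

print(modes(0o002))  # ('775', '664') - user per daemon with a shared group
print(modes(0o022))  # ('755', '644') - single shared user
print(modes(0o007))  # ('770', '660') - additionally deny "other" entirely
```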
¶ UMASK

Many Docker images accept -e UMASK=002 as an environment variable, and some software can be configured with a user, group and umask (NZBGet) or folder/file permissions (Sonarr/Radarr) inside the container. This ensures that files and folders created by one can be read and written by the others. If you are using existing folders and files, you will need to fix their current ownership and permissions too, but going forward they will be correct because you set each piece of software up right.
¶ PUID and PGID

Many Docker images also take a -e PUID=123 and -e PGID=321 that let you change the UID/GID used inside the container to that of an account on the outside. If you ever peek in, you’ll find that the username is something like abc, nobody or hotio, but because it uses the UID/GID you pass in, on the outside it looks like the expected user. If you’re using storage from another system via NFS or CIFS, it will make your life easier if that system also has matching users and groups. Perhaps let one system pick the UIDs/GIDs, then re-use those on the other system, assuming they don’t conflict.
¶ Example

You run Sonarr using hotio/sonarr. You’ve created a sonarr user with UID 123 and a shared group media with GID 321, of which the sonarr user is a member. You configure the Docker image to run with -e PUID=123 -e PGID=321 -e UMASK=002. Sonarr also lets you configure the user, group and folder/file permissions; the previous settings should make those unnecessary, but you could configure them if you wanted. A UMASK of 002 results in 775 (drwxrwxr-x) for folders and 664 (-rw-rw-r--) for files. The user/group are a little tricky because inside the container they have a different name, typically abc or nobody.
¶ Single user and optional shared group

Another popular and arguably easier option is a single, shared user. Perhaps even your user. It isn’t as secure and doesn’t follow best practices, but in the end it is easier to understand and implement. The UMASK for this is 022, which results in 755 (drwxr-xr-x) for folders and 644 (-rw-r--r--) for files. The group no longer really matters, so you’ll probably just use the group named after the user. This does make it harder to share with other users, so you may still end up wanting a UMASK of 002 even with this setup.
¶ Ownership and permissions of /config

Don’t forget that your /config volume will also need correct ownership and permissions, usually the daemon’s user and that user’s group, like sonarr:sonarr, and a umask of 022 or 077 so only that user has access. In a single-user setup, this would of course be the one user you’ve chosen.
¶ Consistent and well planned paths

The easiest and most important detail is to create unified path definitions across all the containers.
If you’re wondering why hardlinks aren’t working or why a simple move is taking far longer than it should, this section explains it. The paths you use on the inside matter. Because of how Docker’s volumes work, passing in separate volumes such as the commonly suggested /tv, /movies, and /downloads makes them look like different file systems, even if they are a single file system outside the container. This means hardlinks won’t work, and instead of an instant, atomic move, a slower and more IO-intensive copy+delete is used. And if you have multiple download clients because you’re using torrents and usenet, a single shared /downloads path means their files will be mixed up. Finally, the Radarr in one container will ask the NZBGet in its own container where files are; using the same path in both means it will all just work. If you don’t, you’d need to fix it with a remote path mapping.
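The difference is easy to demonstrate outside of Docker. In this illustrative Python sketch, one temporary directory stands in for a single /data volume; the hardlink and the rename are both instant because the download and library folders live on the same file system:

```python
import os
import tempfile

# One temp dir = one file system, standing in for a single /data volume
# shared by the download client and the library.
root = tempfile.mkdtemp()
torrents = os.path.join(root, "torrents")
media = os.path.join(root, "media")
os.makedirs(torrents)
os.makedirs(media)

src = os.path.join(torrents, "episode.mkv")
with open(src, "w") as f:
    f.write("payload")

# Hardlink: a second name for the same inode, no data copied. The torrent
# can keep seeding from torrents/ while media/ references the same file.
dst = os.path.join(media, "episode.mkv")
os.link(src, dst)
print(os.stat(src).st_ino == os.stat(dst).st_ino)  # True - same inode

# Atomic move: within one file system, rename() is instant regardless of
# file size. Across two file systems (two separate Docker volumes), the
# same logical move degrades to a slow copy + delete.
os.replace(src, os.path.join(media, "episode-moved.mkv"))
```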
So pick one path layout and use it for all of them. It's suggested to use /data, but there are other common names like /shared, /media or /dvr. Keeping this the same on the outside and inside will make your setup simpler: there is only one path to remember, and it helps when integrating Docker and native software. For example, Synology might use /volume1/data and unRAID might use /mnt/user/data on the outside, but /data on the inside is fine.
It is also important to remember that you’ll need to set up or re-configure paths in the software running inside these Docker containers. If you change the paths for your download client, you’ll need to edit its settings to match and likely update existing torrents. If you change your library path, you’ll need to change those settings in Sonarr, Radarr, Lidarr, Plex, etc.
¶ Examples

What matters here is the general structure, not the names. You are free to pick folder names that make sense to you, and there are other ways of arranging things too. For example, you’re unlikely to run into conflicts from identical releases between usenet and torrents, so you could put both in /data/downloads/{movies|books|music|tv} folders. Downloads don’t even have to be sorted into subfolders at all, since movies, music and tv will rarely conflict.
This example data folder has subfolders for torrents and usenet, and each of these has subfolders for tv, movie and music downloads to keep things neat. The media folder has nicely named tv, movies, books and music subfolders; this is your library, and it’s what you’d pass to Plex, Kodi, Emby, Jellyfin, etc.
For the below example, data is equivalent to the host path /host/data and the Docker path /data.
data
├── torrents
│   ├── movies
│   ├── music
│   ├── books
│   └── tv
├── usenet
│   ├── movies
│   ├── music
│   ├── books
│   └── tv
└── media
    ├── movies
    ├── music
    ├── books
    └── tv
The path for each Docker container can be as specific as needed while still maintaining the correct structure:

¶ Torrents

data
└── torrents
    ├── movies
    ├── music
    ├── books
    └── tv
Torrents only needs access to torrent files, so pass it -v /host/data/torrents:/data/torrents. In the torrent software’s settings, you’ll need to reconfigure paths, and you can sort into subfolders like /data/torrents/{tv|books|movies|music}.
¶ Usenet

Usenet only needs access to usenet files, so pass it -v /host/data/usenet:/data/usenet. In the usenet software’s settings, you’ll need to reconfigure paths, and you can sort into subfolders like /data/usenet/{tv|movies|music}.
¶ Media Server

Plex/Emby only needs access to your media library, so pass -v /host/data/media:/data/media, which can have any number of subfolders like movies, kids movies, tv, documentary tv and/or music.
¶ Sonarr, Radarr and Lidarr

data
├── torrents
│   ├── movies
│   ├── music
│   └── tv
├── usenet
│   ├── movies
│   ├── music
│   └── tv
└── media
    ├── movies
    ├── music
    └── tv
Sonarr, Radarr and Lidarr get everything using -v /host/data:/data, because then the download folder(s) and media folder look like, and are, one file system. Hardlinks will work, and moves will be atomic instead of copy + delete.
¶ Issues

There are a couple of minor issues with not following the Docker image’s suggested paths.
The biggest is that volumes defined in the Dockerfile will get created if they’re not specified, which means they’ll pile up as you delete and re-create the containers. If they end up with data in them, they can consume space unexpectedly, and likely in an unsuitable place. You can find a cleanup command in the helpful commands section below. This could also be mitigated by passing in an empty folder for all the volumes you don’t want to use, like /data/empty:/movies and /data/empty:/downloads. Maybe even put a file named DO NOT USE THIS FOLDER inside, to remind yourself.
Another problem is that some images are pre-configured to use the documented volumes, so you’ll need to change settings in the software inside the Docker container. Thankfully, since configuration persists outside the container, this is a one-time issue. You might also pick a path like /data or /media which some images already define for a specific use. It shouldn’t be a problem, but it will be a little more confusing when combined with the previous issues. In the end, it is worth it for working hardlinks and fast moves; the consistency and simplicity are welcome side effects as well.
If you use the latest version of the abandoned RadarrSync to synchronize two Radarr instances, it depends on mapping the same inside path to a different path on the outside; for example, /movies for one instance would point at /data/media/movies and for the other at /data/media/movies4k. This breaks everything you’ve read above, and there is no good solution: you either use the old version, which isn’t as good; do your mapping in a way that is ugly and breaks hardlinks; or just don’t use it at all.
¶ Running containers using

¶ Docker Compose

This is the best option for most users: it lets you control and configure many containers and their interdependencies in one file. A good starting place is Docker’s own Get started with Docker Compose. You can use composerize or ghcr.io/red5d/docker-autocompose to convert docker run commands into a single docker-compose.yml file.
The below is not a complete working example! The containers only have PUID, PGID, UMASK and example paths defined to keep it simple.
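A minimal sketch of what such a stripped-down compose file could look like; the image names, IDs and host paths are placeholders, not recommendations:

```yaml
version: "3.9"
services:
  sonarr:
    image: hotio/sonarr
    environment:
      - PUID=123
      - PGID=321
      - UMASK=002
    volumes:
      - /host/sonarr/config:/config
      - /host/data:/data
  nzbget:
    image: hotio/nzbget
    environment:
      - PUID=124
      - PGID=321
      - UMASK=002
    volumes:
      - /host/nzbget/config:/config
      - /host/data/usenet:/data/usenet
```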
¶ docker run

Like the Docker Compose example above, the following docker run commands are stripped down to only the PUID, PGID, UMASK and volumes, in order to act as an obvious example.
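For instance, a hypothetical stripped-down run of the Sonarr example from earlier (image name, IDs and host paths are placeholders):

```shell
docker run -d \
    --name sonarr \
    -e PUID=123 -e PGID=321 -e UMASK=002 \
    -v /host/sonarr/config:/config \
    -v /host/data:/data \
    hotio/sonarr
```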
¶ Systemd

For maintaining a few Docker containers, using plain systemd is an option. It standardizes control and makes dependencies simpler for both native and Docker services. The generic example below can be adapted to any container by adjusting or adding the various values and options.
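A generic sketch of such a unit, assuming Docker itself runs as a systemd service; the container name, image and paths are placeholders:

```ini
# /etc/systemd/system/sonarr.service (hypothetical example)
[Unit]
Description=Sonarr container
Requires=docker.service
After=docker.service

[Service]
# The leading "-" tells systemd to ignore failures (e.g. no old container).
ExecStartPre=-/usr/bin/docker rm -f sonarr
ExecStart=/usr/bin/docker run --rm --name sonarr \
    -e PUID=123 -e PGID=321 -e UMASK=002 \
    -v /host/sonarr/config:/config \
    -v /host/data:/data \
    hotio/sonarr
ExecStop=/usr/bin/docker stop sonarr
Restart=always

[Install]
WantedBy=multi-user.target
```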
¶ Helpful commands

¶ Prune Docker

Remove unused containers, networks, volumes, images and build cache. As the WARNING printed by this command says, it will remove all of the previously mentioned items that are not in use by a running container. In a correctly configured environment this is fine, but be aware and proceed cautiously the first time. See the Docker system prune documentation for more details.
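The command itself is docker system prune; it prompts for confirmation before deleting anything:

```shell
# Removes stopped containers, unused networks, dangling images and build
# cache. Add -a to remove all unused images, and --volumes to also remove
# unused volumes. Prints a WARNING and asks for confirmation first.
docker system prune
```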
¶ Get docker run command

Getting the docker run command from GUI managers can be hard; this Docker image makes it easy for a running container (source).

¶ Get docker-compose

Getting a docker-compose.yml from running instances is possible with docker-autocompose, in case you’ve already started your containers with docker run or docker create and want to change to the docker-compose style. It is also great for sharing your settings with others, since it doesn’t matter what management software you’re using. The last argument(s) are your container names, and you can pass in as many as needed at the same time; the first is required, more are optional. You can see container names in the NAMES column of docker ps. They’re usually set by you, or might be generated based on the image, like binhex-qbittorrent; it is not the image name, like binhex/arch-qbittorrentvpn.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/red5d/docker-autocompose $CONTAINER_NAME $ANOTHER_CONTAINER_NAME ... $ONE_MORE_CONTAINER_NAME
¶ Troubleshoot networking

Most Docker images don’t have many useful troubleshooting tools in them, but you can attach a network-troubleshooting image to an existing container to help with that.
docker run -it --rm --network container:CONTAINER_NAME nicolaka/netshoot
¶ Recursively chmod to 775/664

chmod -R a=,a+rX,u+w,g+w /some/path/here
         ^  ^    ^   ^ adds write to group
         |  |    | adds write to user
         |  | adds read to all and execute to all folders (which controls access)
         | sets all to `000`
¶ Interesting Docker Images

hotio’s images: the documentation and Dockerfiles don’t make any poor path suggestions. Images are automatically updated twice an hour if upstream changes are found. hotio also builds our Pull Requests (except Sonarr), which may be useful for testing.
¶ Custom Docker Network and DNS

One interesting feature of a custom Docker network is that it gets its own DNS server. If you create a bridge network for your containers, you can use their hostnames in your configuration. For example, if you docker run --network=isolated --hostname=deluge binhex/arch-deluge and docker run --network=isolated --hostname=radarr binhex/arch-radarr, you can then configure the Download Client in Radarr to point at just deluge, and it’ll work, communicating on its own private network. This means that if you wanted to be even more secure, you could stop forwarding that port too. If you put your reverse proxy container on the same network, you can even stop forwarding the web interface ports.
¶ Common Problems

¶ Correct outside paths, incorrect inside paths

Many people read this and think they understand, but then they set the outside path correctly to something like /data/usenet, yet miss the point and still set the inside path to /downloads.
Good:
/host/data/usenet:/data/usenet
/host/data/media:/data/media
Bad:
/host/data:/downloads
/host/data:/media
/data/downloads:/data
¶ Running Docker containers as root or changing users around
If you find yourself running your containers as root:root, you’re doing something wrong. If you’re not passing in a UID and GID, you’ll be using whatever the default is for the image, and that is unlikely to line up with a reasonable user on your system. And if you’re changing the user and group your Docker containers run as, you’ll probably end up with permission issues on folders like /config, which will likely contain files and folders created with the UID/GID you used the first time.
¶ Running Docker containers with umask 000

If you find yourself setting a UMASK of 000 (which is 777 for folders and 666 for files), you’re also doing something wrong. It leaves your files and folders readable and writable by everyone, which is poor Linux hygiene.
Docker Guide | WikiArr
Last edited by Administrator, 02/06/2022
https://wiki.servarr.com/docker-guide