Your understanding is correct. Can you describe your setup a little more? It sounds like you may have all of your services on one network. Best practice from a security perspective is to isolate containers by creating separate networks that only allow communication where it's needed. In your case, I would add a separate network to connect tautulli to plex.
Similarly, if you have a shared database container (which I would advise against for security reasons, though it is certainly more convenient), you could create a database network connecting all of the containers that need access to the db, and then use another trafficjam instance to protect that network.
The firewall assumes you have this sort of isolation in place in order to keep it simple and the code understandable. Finer access control is something I'm definitely open to considering if there isn't an easy workaround by partitioning networks.
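As a rough sketch of that layout (the images and network names are only examples, and the trafficjam container itself is left out here), the tautulli ↔ plex link would get its own network alongside the protected proxy network; a shared database network with its own trafficjam instance would follow the same pattern:

```yaml
# Illustrative only: trafficjam protects the "proxy" network, so containers
# that need to talk to each other directly get a separate private network.
services:
  nginx:
    image: nginx
    networks: [proxy]                    # the whitelisted reverse proxy
  plex:
    image: plexinc/pms-docker
    networks: [proxy, plex_internal]
  tautulli:
    image: tautulli/tautulli
    networks: [proxy, plex_internal]     # reaches plex over the dedicated network
networks:
  proxy:          # protected by trafficjam; only the proxy may initiate here
  plex_internal:  # private plex <-> tautulli network, untouched by trafficjam
```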
Understood. All of my containers on the proxy network have their web interfaces exposed by the proxy. Some of them also talk to each other, such as plex. It seemed redundant to have them on 2 networks, but not if trafficjam is running to isolate everything on the proxy network. Also, something like a monitoring service would need full network access (I use uptime kuma).
> Also, something like a monitoring service would need full network access (I use uptime kuma).
Not necessarily! I also use uptime kuma, and I monitor my services by making requests through the reverse proxy. That is, it checks `nextcloud.mydomain.com` instead of `localhost:<nextcloud_port>` or `<nextcloud_docker_container_name>:<nextcloud_port>` (the latter going through the docker network instead of a port map). My rationale is that the reverse proxy will respond appropriately if the backend service is down (with an HTTP 500 or similar), and uptime kuma will see that. If the reverse proxy is down... well, then I'd start getting a lot of down notifications 😆
Of course, if you still want to check status behind the proxy, you could create another trafficjam-protected network for uptime kuma and add all of the containers to that. In an older version, trafficjam actually allowed you to specify multiple whitelist filters, so you could have whitelisted both the reverse proxy and uptime kuma on the same network, but I ended up removing that for simplicity.
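For what it's worth, the second option would look roughly like this. This is only a sketch: the images and network names are placeholders, and the trafficjam container itself (with its whitelist set to uptime kuma) is omitted here and shown further down.

```yaml
# Illustrative only: a second network, protected by its own trafficjam
# instance, where uptime kuma is the whitelisted container.
services:
  uptime-kuma:
    image: louislam/uptime-kuma
    networks: [monitoring]           # the only container allowed to initiate traffic here
  nextcloud:
    image: nextcloud
    networks: [proxy, monitoring]    # joins the monitoring network just to be probed
networks:
  proxy:        # protected by the first trafficjam instance (reverse proxy whitelisted)
  monitoring:   # protected by a second trafficjam instance (uptime kuma whitelisted)
```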
So you can only allow one container (the proxy) to access the network? What about something like authelia, which would interact with nginx for authentication?
> So you can only allow one container (the proxy) to access the network?
So the way it works is you specify the whitelist filter. Containers matching the whitelist filter have access to the whole network; all other containers do not. Specifically, the other containers can't even initiate traffic to the whitelisted containers; they can only respond.
> What about something like authelia, which would interact with nginx for authentication?
In this case, nginx would be in your whitelist, but authelia doesn't have to be. In order to authenticate, nginx initiates a request to authelia, which is permitted by the firewall since nginx is on the whitelist. Similarly, authelia's response is permitted since it is a response to previously allowed traffic.
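To make that concrete, a minimal sketch of just those two services (the names and images are only examples; the trafficjam container and its whitelist filter are configured separately, as in the snippet further down):

```yaml
# Sketch only: nginx and authelia share the protected network, but only
# nginx needs to match trafficjam's whitelist filter. Authelia never
# initiates connections; it only answers nginx's auth requests, and replies
# to whitelisted traffic are allowed through.
services:
  nginx:
    image: nginx
    networks: [proxy]    # whitelisted: may initiate to any container on the network
  authelia:
    image: authelia/authelia
    networks: [proxy]    # not whitelisted: can only respond, never initiate
networks:
  proxy:
```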
It actually just occurred to me that you can use labels to set your whitelist. So in your case, you could have the reverse proxy + uptime kuma + all services on a single network, apply the label `trafficjam_access` to both the reverse proxy and uptime kuma containers, then set up trafficjam with `WHITELIST_FILTER="label=trafficjam_access"`, and both the reverse proxy and uptime kuma can access everything!
I can probably make the docs a little clearer about that.
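A rough compose sketch of that setup. The `WHITELIST_FILTER` value is the one from this thread; everything else about the trafficjam service (the image name, the `NETWORK` variable, host networking, `NET_ADMIN`, the docker socket mount) is my assumption of a typical setup, so check the trafficjam README for the exact requirements.

```yaml
services:
  trafficjam:
    image: kaysond/trafficjam        # assumed image name -- verify against the README
    cap_add: [NET_ADMIN]             # assumed: needs to manipulate host firewall rules
    network_mode: host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      NETWORK: proxy                 # assumed variable name for the protected network
      WHITELIST_FILTER: label=trafficjam_access
  nginx:
    image: nginx
    labels: [trafficjam_access]      # whitelisted: can reach every service
    networks: [proxy]
  uptime-kuma:
    image: louislam/uptime-kuma
    labels: [trafficjam_access]      # whitelisted: can probe every service directly
    networks: [proxy]
  nextcloud:
    image: nextcloud
    networks: [proxy]                # no label: can only answer, never initiate
networks:
  proxy:
```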
As I read it, it looks like if you whitelist a container, it can talk to anything on the managed network. What if you just need it to talk to one or a few other containers as part of a service, e.g., tautulli needs to talk to plex? Set up a separate network for those?