binaryfire opened this issue 5 years ago
> I assume the same workaround applies to ufw.
Yes, it does!
So what's the current best way to fix this, as Docker in their wisdom refuse to fix this clearly broken design?
https://github.com/capnspacehook/whalewall or https://github.com/chaifeng/ufw-docker?
> So what's the current best way to fix this, as Docker in their wisdom refuse to fix this clearly broken design?
The best way to fix this is not to use the `ports` argument of Docker at all, or to expose the ports only locally: either by specifying the interface address that should be listened on, or by using `expose`:

```yaml
ports:
  - "127.0.0.1:8000:8000"
```

or

```yaml
expose:
  - "8000"
```
How does that compare to the other two solutions? I think one of the problems with all this is that not everyone is a Linux networking/firewall specialist, or wants to be one just to use containers. That makes it difficult to understand when to use one of those two solutions, or this third solution of simply not using the `ports` argument.
This is all compounded by seemingly dozens of solutions or variations of solutions in this issue, the linked issues, and the internet.
> How does that compare to the other two solutions?
Binding to localhost (127.0.0.1) - regardless of port - should not directly expose the container to anything outside the system upon which the container is running.
You’d need to specifically set up something like a reverse proxy to route traffic to/from the container. Otherwise, the container would only be accessible from the local system.
> only expose the ports locally
`expose` does nothing. It is documentation in your compose file of what port your container operates on. It does not actually change anything.
The suggestion is simply not to map ports to your LAN and use a reverse proxy to access services outside of the local host. If you want to access it on the local host you can specify the local IP.
Absent the above, it needs custom firewall rules or a change of logic from Docker.
So whalewall is a valid solution then, possibly?
> So whalewall is a valid solution then, possibly?
No idea, I've not read it. A valid solution for me, which I've been using for years (see my original comment above from 2020 https://github.com/docker/for-linux/issues/690#issuecomment-720763186), is what I and others mentioned earlier.
Bind the container to localhost and it cannot be accessed from outwith the local system in the absence of a reverse proxy (or similar mechanism).
You can define that in your compose file (if you use that), or even as an argument in the command to fire up a container.
You want to ensure you have the localhost IP 127.0.0.1 explicitly specified, followed by whatever port you like, e.g., 127.0.0.1:8000. Note that connections to that port (in this example, port 8000) are then ONLY available to the system running the container. To connect to it from outside, you'd need to set up a reverse proxy, like NGINX, to do something like forward requests coming into your server on port 443 to 127.0.0.1:8000.
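As a rough illustration of that last point, a minimal NGINX server block for such a setup might look like the following; the domain, certificate paths, and the upstream port are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # Forward incoming HTTPS requests to the container bound to loopback.
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```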
You can also do things like restricting access to the container to your local network of course, or a specific IP. But that's outwith the scope of this thread. Specifically: binding to localhost will ensure your container is as secure as the rest of the system upon which it's running; it will be inaccessible to anything else unless you specifically do something to make it available.
(Naturally you'd also need to ensure any reverse proxy etc. is properly configured, and that your firewalls are configured to drop any incoming packets with a source and destination defined as 127.0.0.1; by default they should be.)
Disclaimer: I'm not a network security expert, but I was running containers like this for years and didn't notice any issues. If you're running anything critical, then you should employ a network/cyber security professional to ensure your systems are secure in any case.
@UplandsDynamic I just came across this issue this week while trying to migrate between reverse proxy managers. I just read the thread to refresh myself.
The problem is that one of the most popular proxy managers, nginx proxy manager, uses a docker container that relies on 3 ports being forwarded in order to work. It's a catch-22: if you want to use ufw to manage firewall rules, the best answer is to bind docker containers locally and use a reverse proxy to access them; but in order to use a reverse proxy, you need a docker container to be accessible from outside localhost.
Since it is just the 2 ports, you could forward those manually in iptables and go ahead with this workaround. But this essentially disables the use of streams in nginx proxy manager, which is like a reverse proxy but for ports instead of webservers.
In these cases, I think the best workaround is to just edit /etc/docker/daemon.json and set iptables to false. This supposedly risks breaking internet connectivity for the containers, but I have tested it and they all seem to work fine. I think if they get updated, though, or a new container gets added, I might have to temporarily re-enable iptables in daemon.json.
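For reference, the setting being described is a single key in /etc/docker/daemon.json (the Docker daemon needs a restart to pick it up), though note the warning about this approach a couple of comments below:

```json
{
  "iptables": false
}
```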
It would really be nice, though, if the Docker team could provide an official workaround. This has been an issue for almost 5 years now with no real solution.
Disabling iptables will lead to other serious problems though…
https://stackoverflow.com/questions/30383845/what-is-the-best-practice-of-docker-ufw-under-ubuntu
> The problem is that one of the most popular proxy managers, nginx proxy manager
@Masong19hippows If you're using a Docker container for your reverse proxy, why can't you just allow that container to bind to 0.0.0.0 (or a specific IP for your requirements)? You'd then configure your NGINX container to forward requests to the correct connected containers (presumably via the docker internal network), in the usual way, using the NGINX config files.
I've not done that myself, however, so perhaps you're hitting issues I'm not aware of. I have no need nor desire for containerisation of 'all the things'; I just run proxy servers (NGINX) on the OS, or on other machines connected by overlay networks.
I do not allow Docker containers to bind to anything other than the machine they're running on (localhost), period. I just don't trust it & never really have. Allowing direct external connections in principle just needlessly broadens the potential attack surface.
I prefer to route incoming connections to any service running on any of my servers (virtual & physical) through dedicated gateways (reverse proxies). That works for me, but I realise that might not be suitable for everyone's needs.
@UplandsDynamic I have bound it to 0.0.0.0. This bypasses ufw rules, though, and renders them useless for the reasons above.
You are right that setting iptables to false might introduce other issues. I was originally using fail2ban with ufw to automagically block IPs from accessing the server at all. I just gave up and switched the action to use iptables rules instead of ufw, and that seems to work. I didn't want to do this originally, though, because ufw makes it easier for management purposes.
I still think an official solution from docker would be better though. For compatibility between docker and ufw, I had to stop using ufw lol.
Although this is obviously a design flaw of Docker, I agree with @UplandsDynamic and don't recommend any "workaround" suggested above regarding uninstalling iptables or UFW. Any production environment should have a network firewall in front of the Docker containers regardless. It's common sense to restrict port bindings to 127.0.0.1 and employ an NGFW as an additional security measure.
For the above-mentioned reverse proxy scenario, it's pretty simple to bind all ports of the services to 127.0.0.1 and the proxy itself to 0.0.0.0.
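A sketch of that split in compose terms; the service names, images and ports are illustrative:

```yaml
services:
  app:
    image: myapp:latest            # placeholder backend image
    ports:
      - "127.0.0.1:8000:8000"      # reachable from the host only
  proxy:
    image: nginx:latest
    ports:
      - "0.0.0.0:443:443"          # the only externally exposed port
```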
@Masong19hippows I don't really understand what you're trying to do. If you have bound your NGINX container to 0.0.0.0, and only exposed the relevant ports on that container, then what's the issue exactly?
You'd want connections to that container (on its exposed ports) to get through your firewall in any case, to receive incoming connections. None of your other containers would be exposed outside the system they're hosted on unless, for example, you explicitly exposed their ports and connected them to your NGINX container over the docker internal network.
As I said, personally I wouldn't use a container for the reverse proxy in any case. I'd bind everything to 127.0.0.1 and then have NGINX (or whatever else) forward traffic where you need it.
@UplandsDynamic
The problem is with fail2ban and what it does to firewall rules. Fail2ban provides protection against brute-force attacks. It provides an extra layer of security by editing firewall rules to deny traffic from specific IP addresses. What you are talking about is just using the firewall for ports, not for denying traffic from certain IPs.
If you are hosting a webpage that needs auth at ex.domain.com, and a specific IP address accessing it fails the authentication a certain number of times within a dedicated time window, then fail2ban sets a firewall rule that denies traffic from that specific IP address to *.domain.com. This prevents another unauthorized login attempt from that IP to ex.domain.com.
So before I switched to NPM, I had fail2ban set up with ufw so that it would insert a deny rule for that specific IP address, and that's how it would block it. I can't do that with NPM, though, because it's in a docker container and doesn't follow ufw rules, including the deny rules inserted by fail2ban. I wanted ufw to be the master firewall rule table, where nothing could bypass it. Does this make sense?
I also didn't uninstall ufw. I just switched fail2ban to use iptables for inserting these deny rules instead of ufw so that they would actually be followed. I am following exactly what you said and have been binding docker containers to localhost and only exposing the proxy manager to outside networks.
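For anyone wanting to replicate that switch, a minimal jail.local sketch; the jail name and log path are assumptions, and the `chain` option points fail2ban's iptables actions at Docker's DOCKER-USER chain so bans also apply to traffic forwarded to containers:

```ini
# /etc/fail2ban/jail.local
[DEFAULT]
banaction = iptables-multiport
# Insert ban rules into the chain Docker consults for forwarded traffic,
# rather than the default INPUT chain (which container traffic never hits).
chain = DOCKER-USER

# Hypothetical jail watching the proxy's auth log.
[npm-auth]
enabled  = true
port     = http,https
logpath  = /var/log/npm/default-host_access.log
maxretry = 5
```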
As some have mentioned previously here, it's not that Docker or the Moby project are completely unaware of, or ignoring, the issue:
What the hell... I'm currently dealing with immense brute force attacks on my simple blog, and I was relying on my ufw rate limit until I realised it wasn't working and that's why my server keeps crashing.
The topic seems to be a rabbit hole when you look at this thread. As a developer, I'm a bit shocked by this default behaviour of Docker and don't really know what to do.
Does anyone have a quick fix for rate limiting with 80/tcp and 443/tcp on my (Docker) reverse proxy? Or is there no easy answer here?
P.S. This ufw-docker project and its unanswered issues are too fishy for me.
> What the hell... I'm currently dealing with immense brute force attacks on my simple blog, and I was relying on my ufw rate limit until I realised it wasn't working and that's why my server keeps crashing.
> The topic seems to be a rabbit hole when you look at this thread. As a developer, I'm a bit shocked by this default behaviour of Docker and don't really know what to do.
> Does anyone have a quick fix for rate limiting with 80/tcp and 443/tcp on my (Docker) reverse proxy? Or is there no easy answer here?
> P.S. This ufw-docker project and its unanswered issues are too fishy for me.
Docker seems to take an approach that views a firewall as only allowing/denying ports. I think the reason this thread has stayed open so long with no real solution is that the team doesn't understand the abilities of a firewall, and that replacing a very specific part of the firewall doesn't solve the issue of the firewall not working. I think it just comes from a lack of understanding rather than from a "fishy" stance.
My solution was to just use the host network instead of a NATted docker network. This seems to make docker respect the ufw rules. I believe this is just `network_mode: "host"` in a docker compose file.
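A sketch of that approach in a compose file (the image is NPM's published one, but treat the details as illustrative). With host networking the container shares the host's network stack, so no NAT rules are created and incoming traffic hits the host's `filter/INPUT` chain, where ufw rules apply:

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    network_mode: host    # shares the host network stack; `ports:` is ignored
```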
Again, the reason this thread is still open is because it is no longer being actively monitored by the Docker / Moby team.
This is a LEGACY issue tracker. This repo readme states:
This is a legacy issue tracker to manage issues related with Docker Engine for Linux. To report an issue or request a new feature please refer to the upstream Moby Project, or Docker Desktop for Linux in case you are running Docker Desktop.
It further advises:
Please report any security issues or vulnerabilities responsibly to the Docker security team. Please do not use the public issue tracker.
As @kernstock relates in his comment of 7 May, there are related issues open on the Moby project GitHub account.
> Again, the reason this thread is still open is because it is no longer being actively monitored by the Docker / Moby team.
> This is a LEGACY issue tracker. This repo readme states:
> This is a legacy issue tracker to manage issues related with Docker Engine for Linux. To report an issue or request a new feature please refer to the upstream Moby Project, or Docker Desktop for Linux in case you are running Docker Desktop.
> It further advises:
> Please report any security issues or vulnerabilities responsibly to the Docker security team. Please do not use the public issue tracker.
> As @kernstock relates in his comment of 7 May, there are related issues open on the Moby project GitHub account.
The linked issue has been open for a year, with someone asking for an update 6 months ago and getting no answer. I understand this might not be the correct place, but you aren't suggesting anything better if you actually want a response from people.
This issue and the one described in the following moby issue are different.
Unfortunately, ufw is a bit too simplistic to be compatible with Docker. It assumes all traffic is addressed to the host, whereas containers are really independent routed endpoints.
iptables, and the underlying kernel subsystem, distinguish between these two types of destination: the current host (eg. the `filter/INPUT` chain) vs. routed traffic (eg. `filter/FORWARD`). OTOH, ufw only supports rules for the current host (ie. in the `filter/INPUT` chain), but no packets addressed to containers will go through that chain. Hence, 'Docker bypasses ufw rules'.
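(For readers who want to filter that forwarded traffic today, Docker's documented extension point is the `DOCKER-USER` chain, which sits in the FORWARD path and is evaluated before Docker's own rules. A sketch, where the interface name and allowed subnet are assumptions:)

```sh
# Drop traffic to containers arriving on eth0 unless it comes from the LAN.
iptables -I DOCKER-USER -i eth0 ! -s 192.168.0.0/16 -j DROP
```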
We won't implement the IPVS port mapper I once proposed, as it'd be too much work with uncertain outcomes. However, we recently talked about introducing a 'new' port-mapper that would be based solely on `docker-proxy` (instead of using it in combination with `iptables`). This proxy would run on the host as a regular process, so packets addressed to published ports would go through `filter/INPUT` -- ufw rules would be applied properly. However, we'd still need to masquerade traffic coming out of containers, so ufw rules would not apply there. That's probably not a big issue for most users, but for some it'd mean 'Docker bypasses ufw' again.
We can't solve this issue on our side only. I get that for ufw, simplicity is one of its core values, and running it on a router is probably out of scope, but at the end of the day a docker host really is a router. We'd need the ufw maintainer(s) to support that use case. IIRC there's an open issue on the Ubuntu bug tracker for that, with a low number of comments. I encourage you to ask there.
@akerouanton
> Unfortunately, ufw is a bit too simplistic to be compatible with Docker.
That is sad. For a long time I thought Docker was the epitome of simplicity and ease of use. When I found out on Monday/Tuesday that something as mundane as the interaction between two applications (which is what simplicity is all about) didn't work, I was very disappointed.
> IIRC there's an open issue on the Ubuntu bug tracker for that, with a low number of comments. I encourage you to ask there.
How long should I let my Docker reverse proxy be bombarded on my basic VPS without a ufw rate limit? It's been dying every few minutes for a week. It will probably be another five years before there is a user-friendly, simple solution for ufw and Docker.
I'm going to remove Docker from my stack again, that's my solution, and unfortunately it has lost the simplicity argument for me, so it won't be back on my stack any time soon.
Thanks for the answer though, even if it doesn't help anyone.
@Irotermund If you bind your containers to localhost, you could rate limit your reverse proxy as much as you like. You just need to constrain Docker container connections to the local system upon which they're hosted (bind to localhost), then configure the rest of your proxies, firewalls and networking appropriately.
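(With nothing published past loopback, standard ufw rate limiting on the proxy's ports then works as expected, e.g.:)

```sh
ufw limit 80/tcp
ufw limit 443/tcp
```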
Expected behavior
Hi all!
ufw in Ubuntu should be treated as the "master" when it comes to low-level firewall rules (like firewalld in RHEL). However, docker bypasses ufw completely and does its own thing with iptables. It was only by chance (luckily!) that we discovered this. Example:
```sh
ufw deny 8080               # blocks all external access to port 8080
docker run jboss/keycloak
```
Expected behaviour: the Keycloak container should be available at port 8080 on localhost/127.0.0.1, but not from the outside world.
Actual behavior
UFW reports port 8080 as blocked, but the Keycloak docker container is still accessible externally on port 8080.
There is a workaround (https://www.techrepublic.com/article/how-to-fix-the-docker-and-ufw-security-flaw/); however, I think TechRepublic are correct when they describe it as a "security flaw", and it's a pretty serious one. Most people using Ubuntu use ufw. I imagine a large number of them are unaware their UFW rules are being bypassed and all their containers are exposed.
Is this something that can be addressed in the next update? That article was published in Jan 2018.