binaryfire opened this issue 5 years ago
The problem is ufw does its own thing here. The best thing to do would be to insert a jump rule into the DOCKER-USER
chain which will forward to the ufw chain.
There is a pretty lengthy discussion on this in github.com/moby/moby, though (search is failing me, unfortunately).
Note that in your example, docker is not doing anything with iptables OR networking since it's using --net=host.
@cpuguy83 Thanks for the quick reply.
With --net=host specified, docker (latest version) is still opening the port via iptables, at least on my fresh Ubuntu 18.04 install. If that's not supposed to happen, maybe it's a bug? I agree it definitely shouldn't be doing anything with iptables or networking if --net=host is specified.
I'll see if I can find the thread you mentioned. Perhaps the docker install process could automatically add DOCKER_OPTS="--iptables=false" if ufw is enabled?
I just came across the same article myself, and I am very surprised by this behaviour. I don't know all the details of Linux networking, but is there any reason to be doing this? I've never heard of any other program that goes around the firewall. If this is some kind of feature, it should definitely be disabled by default, because it opens up the entire server.
@Nutomic I agree. I don't think looking at this as a "UFW problem" is the right approach. The way UFW manages the firewall is quite elegant, which is why the majority of Ubuntu users are using it over firewalld.
I really think the docker devs need to add UFW compatibility ASAP as it's a serious security issue. Or include a clear warning on install letting users know their UFW rules will be ignored and instructions on a workaround.
@Nutomic ...which is why the majority of Ubuntu users are using it...
+100 All of us are using ufw.
This is an exploit waiting to be exploited.
This should be considered a security risk. Why is it not given priority?
Does someone know how to report security risks?
In other projects there is usually an email address (or some other mechanism) to report exploits directly to management.
If a server is compromised and docker can be blamed somehow, it would be a major PR headache (at best) for docker management, so I'm sure they'd want to know about this immediately. And if it cannot be fixed, I'm also sure they'd quickly update the docs with major warnings about this problem to let users know about it and legally pass the blame onto us. We should not discover this problem by accident, it should be made clear in the docs - and how best to work around it.
There is a SECURITY.md in the moby repo explaining it. This is not an exploit, however it is easy to misconfigure.
I don't think the desired outcome should be to default to iptables=false but rather have a way to have docker insert its jump rule into ufw.
@cpuguy83 IMHO the fact that it's so easy to misconfigure makes it a pretty serious security risk. Other than third party firewall software, I don't know of any other packages that bypass UFW rules. It's such a low-level OS function that most users, even advanced ones, wouldn't think to check.
Definitely agree re: implementation though. Getting Docker to insert its jump rule into UFW would be great.
Thanks to @cpuguy83, it's:
security@docker.com
I recommend we all send a message there.
I just had a look at the multiple UFW-related issues for moby and they've all been closed... :/ Does anyone know if podman has the same problem? If not, might be a viable alternative
That is very surprising! More reason for us to message the security team. This is crazy - how many users don't know they have gaping holes in their security because of this???
For anyone arriving here from google in the future, please do two things:
security@docker.com
btw, I think all that's needed is to insert a jump rule from DOCKER-USER to ufw-user-forward, e.g.:
sudo iptables -A DOCKER-USER -j ufw-user-forward
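One way to make a jump rule like this survive reboots and ufw reloads is to append it to ufw's own rules files, e.g. a *filter block at the end of /etc/ufw/after.rules. This is a sketch, not official guidance: it assumes ufw is enabled (so the ufw-user-forward chain exists) and declares DOCKER-USER in case Docker has not created it yet.

```text
*filter
# Create DOCKER-USER if it doesn't exist yet, then jump to ufw's chain.
:DOCKER-USER - [0:0]
-A DOCKER-USER -j ufw-user-forward
COMMIT
```

After editing, `sudo ufw reload` should apply it.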
Adding some info per @menathor's request.
See:
Most of these recommend disabling iptables manipulation with --iptables=false and manually configuring the rules as necessary (i.e. add the following to /etc/ufw/before.rules):
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING ! -o docker0 -s 172.16.0.0/12 -j MASQUERADE
COMMIT
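(On newer installs the DOCKER_OPTS approach is typically replaced by a setting in /etc/docker/daemon.json; the *nat MASQUERADE block above is the manual replacement for the NAT rules Docker would otherwise manage itself:)

```json
{
  "iptables": false
}
```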
More recently, two other workarounds have surfaced which do not use this flag and seem to be more robust:
Someone from the docker team please tell us the official and secure way to deal with this problem?
I don't want to become an iptables expert just so I can use docker.
@lonix1 Did you try sudo iptables -A DOCKER-USER -j ufw-user-forward?
There are about a dozen approaches starting from 2013/2014, and changing with each major version of docker. At this point I have no understanding which to use, and why. I know ufw well, but not iptables.
I'm hesitant to use the mindless copy-paste approach as I'm afraid to blow up my server's security. That's why I would be grateful for official guidance, and explanation.
The code you posted seems good, but I have no idea what it does 😃 Would you mind telling us in your opinion which is the best way (I assume it's what you posted above), and why/how it works?
(The simplest approach I've found is not to change anything, but to use 127.0.0.1:8080:80 and expose port 8080 via nginx. Even then I'm not sure if that's the best approach, though it seems "cleanest".)
Thanks for helping us out!
TMK nothing has changed in a very long time. I do not expect the default handling of this to change either, as it affects many, many users.
Docker creates an iptables chain called DOCKER-USER; this is where users can add their own filtering logic, and it gets run before the port forwarding rules. The above command puts a jump rule from DOCKER-USER to ufw-user-forward, which means anything that hits DOCKER-USER (which should be anything Docker related) will get passed along to the ufw-user-forward chain, which is where ufw rules should go.
@cpuguy83 So if I understand correctly, if I implement your approach, thereafter I can continue to use ufw to open/close ports, and never need to mess around with iptables at all? That sounds perfect.
On another note, since you are obviously an expert in this matter, how do you feel this approach compares to the one I posted above (127.0.0.1:8080:80 + nginx)? Do you feel one is better/safer/whatever than the other?
It's always best to be explicit about what you want... e.g. if you don't want nginx to be available on all interfaces then specify the interface (such as 127.0.0.1). The other thing you can do is change the default bind address to 127.0.0.1 in the daemon config, then you need to be explicit about what should have public access rather than what should be private.
I do not expect the default handling of this to change either, as it affects many, many users.
@cpuguy83 I think it's less about a change in behavior and more about a documented best practice to make things work securely.
I say this realizing that you can be explicit about which interface/ip to listen on. But if the expected behavior of UFW is that it blocks all incoming traffic except where specified, then to avoid mistakes, it should do so for Docker as well. Even if it takes some extra configuration.
Also, your jump rule sudo iptables -A DOCKER-USER -j ufw-user-forward doesn't work. The first rule in the DOCKER-USER chain is a RETURN (at least in my setup), so traffic never hits the appended rule. Changing it to an insert (sudo iptables -I DOCKER-USER -j ufw-user-forward) fixes that problem but still doesn't seem to work.
@kaysond What workaround are you using at the moment?
EDIT: just fixed my docker / ufw example - the first one didn't make any sense. It's late over here :P
💯 x this:
I think its less about a change in behavior and more about a documented best practice to make things work securely.
Any solution that requires sysadmin knowledge / reconfiguration every time new ports are involved isn't a viable solution. The whole idea behind ufw and docker is ease of use. So taking a step back and breaking it down into super simplistic (i.e. developer) terms:
docker run -p 9000:9000 imnotasysadmin
makes imnotasysadmin available on port 9000
I then proxy port 9000 to be behind an oauth gateway on port 443
ufw deny all
then ufw allow from 192.168.1.2 to any port 443
means imnotasysadmin should not be accessible on port 9000, and should only be accessible from 192.168.1.2 via the oauth proxy on 443.
The only reasonable solutions for users like me are:
I can't speak for all Ubuntu's users, but I'm a developer, not a sysadmin. I have reasonable linux skills and could implement either of the above solutions no probs. But I no longer feel comfortable using docker on ubuntu in production if my network security is based on hacks from third party sites that may or may not work in certain situations, or may break with future updates.
How can I run things like keycloak, rundeck or other sensitive apps that I'm proxying to 443 and putting behind an oauth proxy if there's any chance docker is going to completely ignore my deny rules and happily expose the original port?
Sorry if this sounds a bit harsh, and peace and love to everyone here, but this has already turned into another "try workaround x" and "workaround x doesn't work when x+y=z" thread. Every one of those on the moby repo has been closed, dating back years.
Developers using Ubuntu (the linux distro with the biggest market share) need an official, supported and documented way of setting docker up so that we can happily docker run and ufw allow / ufw deny all day and everything works as expected.
Or alternatively, official clarification from Docker that "docker does not work with Ubuntu's default firewall". That would be better than silence.
TLDR: https://tenor.com/view/developers-gif-13292051
☮️❤️
@menathor I'm in the same boat as you exactly. And I'm also wary of the copy-pasta approach.
So I decided to dump ufw and use iptables. Which annoys me, as I have to waste time learning it.
I'm still going through various tutorials. But what surprises me so far, is despite the widely-held belief that iptables is "low-level" and complicated.... it's not. It's simple.
I assume there are edge cases though that you wouldn't typically think of - icmp, specific ports, etc. But if you use a whitelist (rather than blacklist) then at the very worst, you'll make a mistake that'll lock you out of your own server. That's not a problem as you can log in via your hoster's web-based console and fix it, and importantly - the bad guys won't be able to get in either. This is the one and only downside to using iptables, and it's manageable.
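To illustrate the whitelist idea (this is a sketch, not a vetted ruleset; the open ports are just examples), a default-drop INPUT policy in iptables-restore syntax might look like:

```text
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Always allow loopback and established connections first.
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# Whitelist only the services you actually run (SSH and HTTPS here).
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
COMMIT
```

Getting SSH into the whitelist before setting the DROP policy is what keeps you from locking yourself out.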
Here are some tutorials I'm going through:
More theoretical:
I'll post my final config when I'm done in a few days, if others do the same then we can compare.
@kaysond What workaround are you using at the moment?
@lonix1 Right now, nothing. But I'm working on a larger scale deployment that needs a solution. For now I would suggest reading this: https://github.com/chaifeng/ufw-docker
It has a very descriptive readme, and once you have an idea of what it's doing, I'd just run the script. This is a temporary solution until the Docker team comes up with something.
This is a temporary solution until the Docker team comes up with something
I would say that is the "fix".
At best we could add a flag to check ufw before running docker rules, but it's the same outcome: an iptables rule that jumps chains.
This is the purpose of the DOCKER-USER chain.
I would say that is the "fix". At best we could add a flag to check ufw before running docker rules, but it's the same outcome: an iptables rule that jumps chains. This is the purpose of the DOCKER-USER chain.
Well, yes. I think it's reasonable to have a set of iptables rules be the "fix", and it's easy enough to implement via UFW's config files. But what the community is looking for is something that has been thoughtfully considered by the Docker team and published in the documentation.
For example, the script I linked will allow all traffic from RFC 1918 ranges to reach the Docker network. This is not how UFW behaves by design. If I turn on a service on my Ubuntu host, but don't explicitly allow it in UFW, not even local traffic should reach it.
And maybe that difference is fine. But I, and probably most others, would be more comfortable taking the recommendation from the Docker documentation, and not some guy's github.
Bump. Anyone have any more input? Maybe we need to open an issue on the documentation repo...
Hey @kaysond Sorry I forgot about this thread. I've been using iptables for a week now and couldn't be happier. For anyone who arrives here, these two links will help you integrate iptables and docker very simply:
If you still want to integrate ufw and docker, then I think what @cpuguy83 wrote above is the way to go (or a variation on that idea). Any rules in the DOCKER-USER chain will be run before docker's own rules, so if there's a jump from there to one of ufw's chains then it MUST work. The question is what order to use, and which one of ufw's many chains to jump to. You'll need to experiment.
Of course the ideal is for the docker team to provide clear guidance of a tested/supported rule, because this isn't something most ufw users know how to do.
A few months ago I learned how to deploy services with docker (I usually use LXD, so I didn't have this security issue before). We deployed an Elasticsearch (docker) service on a customer's server, intended to be accessed only by another server on the local network. The server running ES had only the SSH port exposed (via UFW) for maintenance. After a few days the ES data was gone and a domain name was showing in the data instead (good thing it was only test data). It was mind-blowing! We thought the server had been hacked. We spent so much time looking for clues, which led to nothing. Finally we tested accessing the service directly via the global IP address, and that is how I ended up here.
We have tried most of the recommendations without success (either everyone has access or no one does). Some seemed to work, but after rebooting things went back to the same. Dropping UFW and using only iptables will take time, as neither of us is confident with it. So I came up with another alternative using one of my favorite tools: rinetd (a TCP/UDP port redirector).
This method doesn't require any changes to iptables or ufw. You need to bind the ports in your containers to 127.0.0.1 (instead of 0.0.0.0 or empty). If you can't remove and re-run the container, stop the docker service, edit the hostconfig.json file (under /var/lib/docker/containers/<HASH>/) and set the HostIp value, just after PortBindings, to 127.0.0.1. Start docker again. With that change, the service should only be accessible from localhost.
Then install rinetd and add this rule to /etc/rinetd.conf:
# HOST_IP HOST_PORT FWD_TO_IP FWD_TO_PORT
xxx.xxx.xxx.xxx 9200 127.0.0.1 9200
where xxx... is the IP to bind the service to (either a local network IP or the global IP, depending on your needs).
After that, UFW rules will be respected (e.g. ufw allow from yyy.yyy.yyy.yyy to any port 9200 proto tcp).
It has worked for me so far, and it survives reboots, so I hope this helps. I'm not sure if it has side effects, so do your own tests before trusting this method.
I really hope the docker team can address this situation so no hacks are required to make it safe to use by anyone.
Until a proper fix is implemented, would installing and using firewalld on Ubuntu (instead of UFW) solve this issue?
(As usual, late to the party)
Reiterating @binaryfire's words, is there by now either ...
A search in the Docker documentation only finds me forum posts.
I was looking for the simplest possible solution, and found one:
when publishing a port, I set localhost as the IP, so for example for redis:
-p 127.0.0.1:6379:6379
This way the port is accessible only from localhost, and there's no need to block it through the firewall.
hello @piotr-jarosz!
Block all IN/OUT traffic related to the public interface from/to docker:
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
5107 3477K ACCEPT all -- MY_PUBLIC_INTERFACE * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
113 6720 DROP all -- MY_PUBLIC_INTERFACE * 0.0.0.0/0 0.0.0.0/0
25M 1564M RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
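For reference, rules like the listing above can be expressed as an iptables-restore fragment. This is a sketch reconstructed from the listing, not the exact commands the poster ran; MY_PUBLIC_INTERFACE is a placeholder for the real external interface name.

```shell
# Build an iptables-restore fragment reproducing the DOCKER-USER rules above.
# Both rules use -I, so the second one ends up first in the chain:
# ACCEPT for RELATED,ESTABLISHED, then DROP everything else from the
# public interface, then Docker's implicit RETURN.
cat > /tmp/docker-user.rules <<'EOF'
*filter
-I DOCKER-USER -i MY_PUBLIC_INTERFACE -j DROP
-I DOCKER-USER -i MY_PUBLIC_INTERFACE -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT
EOF
# Apply on the host (requires root):
#   iptables-restore --noflush /tmp/docker-user.rules
```

The --noflush flag matters here: without it, iptables-restore would wipe Docker's own rules in the touched tables.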
The following workaround is a fairly simple one and it works fine for me. I have verified it's working using nmap.
Put a dummy rule for each exposed docker port, or any other port you want to secure, at the very beginning of the /etc/ufw/before.rules file, as shown in the example below. Change "ens160" to your external interface name. The destination can be anything non-existent. In this case port 8080 is redirected to port 999, which goes nowhere. The rest of ufw seems to work fine.
#
# rules.before
#
# Rules that should be run before the ufw command line added rules. Custom
# rules should be added to one of these chains:
# ufw-before-input
# ufw-before-output
# ufw-before-forward
#
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-F
-I PREROUTING -i ens160 -p tcp --dport 8080 -j DNAT --to 0.0.0.0:999
COMMIT
ALTERNATIVE SOLUTION:
I realized this issue existed only by chance recently and my first reaction was much the same as expressed here, one of both surprise and disappointment. I'm not too familiar with docker so I don't know much about it, except that it has this problem and therefore it seems I can't trust it on my Ubuntu servers unless the docker daemon is confined.
I'm looking into the following possibility for confinement:
https://ubuntu.com/blog/stephane-graber-lxd-2-0-docker-in-lxd-712
The LXD containers can be managed by OpenNebula and I can create a docker "template" for deployment which makes this a possibly very fast way to deploy many docker daemons on many hosts securely. It's also possible to use MAAS to setup a new LXD host from bare metal. Also the user space confinement is automatic so no need to wade through a load of instructions on how to confine the docker daemon, as it's already well confined to user space within the LXD container.
Does anyone know if switching to firewalld will solve this? I'm starting to think that might be the easiest workaround.
https://github.com/moby/libnetwork/pull/2548 should add native support for the latest firewalld version soon, which adds docker interfaces to a docker zone that accepts traffic, and we pass direct iptables rules to firewalld for forwarding, DNAT, etc.
We would love to support ufw, but we are unsure what kind/how much plumbing will be needed to support it, since we are not very familiar with ufw.
From the docker side: as @cpuguy83 suggested, would a simple jump to a known ufw-generated iptables chain suffice, i.e. iptables -I DOCKER-USER -j ufw-user-forward?
From the UFW side: the DOCKER-USER chain was created to solve the problem of users adding their own filtering rules. Can UFW insert iptables rules directly into this chain?
Hoping to hear from the ufw experts/maintainers.
This is basically an example of what you're describing, but it would be nice if it were automatic. https://github.com/chaifeng/ufw-docker
I just want to add my support to the RFC 1918 comment in relation to https://github.com/chaifeng/ufw-docker. I run pfSense for my network firewall, and I run UFW as a host firewall to block potential lateral hacking. I suspect many others do also. UFW was my go-to when looking to harden my Docker VM.
Always scan your stuff. I was not pleased finding out about things this way, though I'm not comfortable pointing the finger at Docker or UFW; it just doesn't seem like it should be this way.
This is how the Ubuntu "do-release-upgrade" distribution upgrade handles it:
Continue running under SSH?
This session appears to be running under ssh. It is not recommended to perform a upgrade over ssh currently because in case of failure it is harder to recover.
If you continue, an additional ssh daemon will be started at port '1022'. Do you want to continue?
Continue [yN] y
Starting additional sshd
To make recovery in case of failure easier, an additional sshd will be started on port '1022'. If anything goes wrong with the running ssh you can still connect to the additional one. If you run a firewall, you may need to temporarily open this port. As this is potentially dangerous it's not done automatically. You can open the port with e.g.: 'iptables -I INPUT -p tcp --dport 1022 -j ACCEPT'
To continue please press [ENTER]
The important part being: As this is potentially dangerous it's not done automatically.
I cannot understand that a) Docker modifies iptables b) silently c) by default d) this open issue is not addressed
Re the rinetd workaround above: I had the same idea but it didn't work. I will retry that.
I tried it again, but it seems UFW is not applied to rinetd either?
xxx.xxx.xxx.xxx 9200 127.0.0.1 9200
when adding the above config to rinetd, port 9200 seems to be open to everyone although there is no allow rule in UFW (everything is rejected by default on my UFW)
Wow! Finally I found this. For MONTHS I have been getting desperate wondering why the hell none of my firewall rules apply... I started rewriting fail2ban regexes, ufw rules, tried iptables.
Is there any official/best practice/whatever approach to get pass this problem?
You have a few options. Disable docker's iptables function altogether. Or, if you're an iptables magician, you can write your own rules on top of docker's. Or use the ufw-docker repo's script (with optional automation). Also, if the container doesn't have to be open to the network, you can bind it to localhost and avoid everything else mentioned above.
Is there any official/best practice/whatever approach to get pass this problem?
Another option, which is what I usually do, is bind exposed container ports to 127.0.0.1 (by default they're bound to any IP address - 0.0.0.0), which makes them inaccessible outside the local system. I then run a reverse proxy and route traffic to the containers in that way.
I'm neither an iptables nor a network pro. I understand ufw well enough to set it up the right way, but yeah. All the approaches you mention are fine, but I think it's still too much handwork. I mean, the main reason most people use docker is that they want to host services on the web. I am already running an nginx proxy (jwilder's proxy-companion) but I couldn't get it to work like you explained @Aninstance
I am already running an nginx proxy (jwilders proxy-companion) but I couldn't get it to work for now like you explained
Why not, what's the issue with it? It's essentially just a case of defining your upstream like this example:
upstream my-app {
server 127.0.0.1:8080;
}
Then in your location block you'd define the proxy with proxy_pass http://my-app; (plus your other proxy-related directives, like setting the headers, cache, etc).
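Putting those pieces together, a minimal reverse-proxy config might look something like the following (the server name, port, and upstream name are illustrative, and the TLS certificate directives are omitted):

```nginx
upstream my-app {
    # Container port published only on the loopback interface,
    # e.g. docker run -p 127.0.0.1:8080:80 ...
    server 127.0.0.1:8080;
}

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass http://my-app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Only port 443 then needs a ufw allow rule; the container itself is never reachable from outside the host.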
Wow, that's really upsetting that there is no official guide or whatever from the docker team. After reading through most of the posts (here and on Stack Overflow), for me personally the best method will be to set the default binding IP of docker to 127.0.0.1:
$ sudo nano /etc/docker/daemon.json
{
"ip" : "127.0.0.1"
}
$ sudo service docker restart
And configure UFW only for non-docker services.
This way you need to explicitly bind to your external IP, e.g. 192.168.1.1:8080:80, for anything you do want exposed.
Still, I would rather docker honored UFW rules, but with this approach you don't need to mess around with iptables or ufw rules.
Edit:
Oh man, this is frustrating: Docker Compose does not honor the docker daemon.json settings: https://github.com/docker/compose/issues/2999
So in docker-compose files you still have to either explicitly bind the ports to localhost or set network_mode: default
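For the first option, the explicit loopback binding in a compose file looks something like this (the service and ports are just examples):

```yaml
services:
  redis:
    image: redis:latest
    ports:
      # Bind host port 6379 on the loopback interface only, so the
      # container is not reachable from outside the host.
      - "127.0.0.1:6379:6379"
```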
I tried it again, but it seems UFW is not applied to rinetd either?
xxx.xxx.xxx.xxx 9200 127.0.0.1 9200
when adding the above config to rinetd, port 9200 seems to be open to everyone although there is no allow rule in UFW (everything is rejected by default on my UFW)
Why is it not working? These are two things you should check:
1) You are not binding your docker containers to 127.0.0.1. For example: docker run -p 127.0.0.1:9200:9200 elastic. If you are binding to 127.0.0.1, it should never be open to the outside, as it is only listening on the local interface. So I think you missed this part.
2) If you are sure you did the previous step, try flushing the existing iptables rules (perhaps some rule was left over from your previous attempts).
I have used this method on several servers now and I can tell it has worked 100% of the time (all ports are closed to the outside world unless I specify them in rinetd).
I have a similar problem. I tried some of the approaches found on google; however, they don't work. So I decided to use a cloud firewall service. I think that's a good solution.
However, I still expect this to be fixed so that developers can run docker swarm and have ufw allow and ufw deny work as expected.
Thanks.
The only thing that helped me: https://p1ngouin.com/posts/how-to-manage-iptables-rules-with-ufw-and-docker
Expected behavior
Hi all!
ufw in ubuntu should be treated as the "master" when it comes to low-level firewall rules (like firewalld in rhel). However docker bypasses ufw completely and does its own thing with iptables. It was only by chance (luckily!) that we discovered this. Example:
ufw deny 8080 (blocks all external access to port 8080)
docker run jboss/keycloak
Expected behaviour: the Keycloak container should be available at port 8080 on localhost/127.0.0.1, but not from the outside world.
Actual behavior
UFW reports port 8080 as blocked but the keycloak docker container is still accessible externally on port 8080.
There is a workaround (https://www.techrepublic.com/article/how-to-fix-the-docker-and-ufw-security-flaw/), however I think techrepublic are correct when they describe it as a "security flaw", and it's a pretty serious one. Most people using ubuntu use ufw. I imagine a large number of them are unaware their UFW rules are being bypassed and all their containers are exposed.
Is this something that can be addressed in the next update? That article was published in Jan 2018.