Closed: rmoriz closed this issue 7 years ago.
Thank you for your detailed report.
First of all, I see some duplicated rules here. Did you perhaps docker kill the ipv6nat container and then start it again? Can you flush all rules and restart ipv6nat?
Here's my ip6tables -L output to compare:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-ISOLATION all anywhere anywhere
DOCKER all anywhere anywhere
ACCEPT all anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all anywhere anywhere
ACCEPT all anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp anywhere fd00:dead:beef::XXXX tcp dpt:XXXX
ACCEPT tcp anywhere fd00:dead:beef::XXXX tcp dpt:XXXX
ACCEPT tcp anywhere fd00:dead:beef::XXXX tcp dpt:XXXX
ACCEPT tcp anywhere fd00:dead:beef::XXXX tcp dpt:XXXX
[ ... ]
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all anywhere anywhere
Aside from the duplicated rules, the only differences seem to be:
- your DOCKER-ISOLATION chain contains a DROP rule (which is an isolation rule; I don't have it because I use my bridge network only for IPv6)
- your DOCKER chain is missing the ACCEPT rules for the exposed ports

Let me look into / think about why you could be missing those rules. I'm running 17.03.1-ce, maybe something has changed. Did you have the same issue with other hosts or versions, or is this your first time running ipv6nat?
Also, after a flush + restart, can you send me the output of ip6tables-save instead? Thanks.
Duplicate rules reappear on container restart:
root@host01:~# ip6tables -F
root@host01:~# ip6tables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (0 references)
target prot opt source destination
Chain DOCKER-ISOLATION (0 references)
target prot opt source destination
root@host01:~# docker restart ipv6nat
ipv6nat
root@host01:~# ip6tables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-ISOLATION all anywhere anywhere
DOCKER all anywhere anywhere
ACCEPT all anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all anywhere anywhere
ACCEPT all anywhere anywhere
DOCKER all anywhere anywhere
ACCEPT all anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all anywhere anywhere
ACCEPT all anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
DROP all anywhere anywhere
DROP all anywhere anywhere
RETURN all anywhere anywhere
(first time user, no experience with other docker versions)
I just realised that's normal: they're for different interfaces.
Send me the ip6tables-save output and we can look into this further.
I've reset everything, removed all containers and networks, rebooted, and recreated everything.
root@host01:~# ip6tables-save
# Generated by ip6tables-save v1.4.21 on Fri Jul 21 12:44:09 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [6:519]
:POSTROUTING ACCEPT [6:519]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d ::1/128 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s fd00:dead:beef::/48 ! -o br-692577c71c23 -j MASQUERADE
-A DOCKER -i br-692577c71c23 -j RETURN
COMMIT
# Completed on Fri Jul 21 12:44:09 2017
# Generated by ip6tables-save v1.4.21 on Fri Jul 21 12:44:09 2017
*filter
:INPUT ACCEPT [43:16284]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [45:6368]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o br-692577c71c23 -j DOCKER
-A FORWARD -o br-692577c71c23 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i br-692577c71c23 ! -o br-692577c71c23 -j ACCEPT
-A FORWARD -i br-692577c71c23 -o br-692577c71c23 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
COMMIT
# Completed on Fri Jul 21 12:44:09 2017
and:
root@host01:~# ip6tables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-ISOLATION all anywhere anywhere
DOCKER all anywhere anywhere
ACCEPT all anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all anywhere anywhere
ACCEPT all anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all anywhere anywhere
In your container inspect I see various /srv/docker paths. Are you sure your docker socket is at /var/run/docker.sock? Can you send ls -l /var/run/docker.sock?
root@host01:~# ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Jul 21 12:43 /var/run/docker.sock
root@host01:~# DOCKER_HOST=unix:///var/run/docker.sock docker info | head
Containers: 9
Running: 9
Paused: 0
Stopped: 0
Images: 62
Server Version: 17.05.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
I've just linked /var/lib/docker to /srv/docker for volume size reasons.
Do you have the docker commands for me (docker network create of the network(s), docker run of both the ipv6nat and the other container), so I can try to reproduce it (on Debian + 17.05.0-ce) and debug why it's not seeing the containers / exposed ports? Thanks.
I'm using Chef and this cookbook (it connects to dockerd using the docker-api rubygem), so I cannot provide the commands right now, sorry.
Btw, I tried running the docker-ipv6nat binary outside of docker, but it doesn't change anything in the ip6tables rules… also no log/stdout/stderr output at all.
OK, thanks. It's probably something simple, and I'm not sure it's related to the docker version. It obviously sets up ip6tables correctly, but then fails to detect the containers / exposed ports, so it creates no rules for those ports.
I'll look into it and let you know. Would be nice to get to the bottom of this. Thanks for all the detailed info so far.
Unfortunately I was not able to reproduce the problem by just creating the containers. What I did is the following:
sudo docker run -d --name ipv6nat --restart=always -v /var/run/docker.sock:/var/run/docker.sock:ro -v /lib/modules:/lib/modules:ro --privileged --net=host robbertkl/ipv6nat
sudo docker network create --ipv6 --subnet=fd00:dead:beef::/48 -o com.docker.network.bridge.name=mynetwork mynetwork
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d ::1/128 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s fd00:dead:beef::/48 ! -o mynetwork -j MASQUERADE
-A DOCKER -i mynetwork -j RETURN
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o mynetwork -j DOCKER
-A FORWARD -o mynetwork -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i mynetwork ! -o mynetwork -j ACCEPT
-A FORWARD -i mynetwork -o mynetwork -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
COMMIT
(so basically the same as yours)
sudo docker run -d --name web --network mynetwork -p 80:80 robbertkl/php
So here the 3 rules were created by ipv6nat for the exposed port: 2 in the nat table and 1 in the filter table. All is good.
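For reference, the three rules typically look something like the following, mirroring what Docker itself creates for IPv4. This is a sketch in ip6tables-save format, not output from an actual run; the container address fd00:dead:beef::2 is illustrative:

```
*nat
# hairpin masquerade for the container reaching its own published port
-A POSTROUTING -s fd00:dead:beef::2/128 -d fd00:dead:beef::2/128 -p tcp -m tcp --dport 80 -j MASQUERADE
# DNAT the published port to the container
-A DOCKER ! -i mynetwork -p tcp -m tcp --dport 80 -j DNAT --to-destination [fd00:dead:beef::2]:80

*filter
# accept forwarded traffic to the container's published port
-A DOCKER -d fd00:dead:beef::2/128 ! -i mynetwork -o mynetwork -p tcp -m tcp --dport 80 -j ACCEPT
```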
So it seems the problem is not with Debian 8 or Docker 17.05.0-ce. Could you try the above docker commands on your machine (manually, without chef) to see if it works? Then we know it's somewhere in the way chef creates everything.
Nevermind @rmoriz, I found it! Your container is started with -p 0.0.0.0:80:80 instead of -p 80:80. This is a "feature" of docker-ipv6nat: it sees it's binding to a specific IPv4 address (or in this case, any IPv4 address), so it refrains from binding to IPv6.
Are you able to change this behaviour for your setup?
I have considered changing ipv6nat so that 0.0.0.0 will be a special case and bind to any IPv6 address as well (as if it was left out), especially since for plain Docker, -p 80:80 actually means the same as -p 0.0.0.0:80:80.
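The special-casing described above can be sketched roughly like this. This is a hypothetical simplification for illustration, not docker-ipv6nat's actual source; the function name and signature are made up:

```go
package main

import "fmt"

// shouldAddIPv6Rule decides whether an IPv6 NAT rule should be created
// for a port binding, based on the HostIp that Docker recorded.
// Per this thread: before v0.3.0 only an empty HostIp qualified;
// from v0.3.0 on, "0.0.0.0" is treated as "any address" as well.
func shouldAddIPv6Rule(hostIP string) bool {
	switch hostIP {
	case "", "0.0.0.0":
		// no specific address requested: bind IPv6 too
		return true
	default:
		// a specific IPv4 address was requested: skip IPv6
		return false
	}
}

func main() {
	fmt.Println(shouldAddIPv6Rule(""))          // true
	fmt.Println(shouldAddIPv6Rule("0.0.0.0"))   // true
	fmt.Println(shouldAddIPv6Rule("127.0.0.1")) // false
}
```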
I went ahead and changed this right away. This would make it easier for you and any other people running into this issue.
Just upgrade to v0.3.0 and you should be good to go!
(Closing this issue now, feel free to reopen or open a new one if you're still having issues)
Thanks a lot!
I didn't specify 0.0.0.0 myself but sadly https://github.com/chef-cookbooks/docker/blob/master/libraries/helpers_container.rb#L160 adds 0.0.0.0 :/
Yeah, that's what I figured. That's why I changed it in docker-ipv6nat, so you wouldn't have to change the cookbook.
I was able to abuse the cookbook by using something like:
port [
':80:8080',
':443:8443',
]
which even works in the previous version of ipv6nat. Interestingly enough, docker ps still shows 80/tcp, 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp, but the inspect exposes the diff:
[
{
"HostConfig": {
"PortBindings": {
"8080/tcp": [
{
"HostIp": "",
"HostPort": "80"
}
],
"8443/tcp": [
{
"HostIp": "",
"HostPort": "443"
}
]
}
},
"NetworkSettings": {
"Ports": {
"80/tcp": null,
"8080/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "80"
}
],
"8443/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "443"
}
]
}
}
}
]
odd :/
Thanks a lot again, it works now like a charm :)
Scenario
Debian 8 Docker version 17.05.0-ce, build 89658be
docker.service:
Steps
Privileged, IPv6 enabled, host net, modules + docker socket mounted:
(container appears after step 3)
As you can see, the container is in the IPv6-enabled network. However, the ports are not reachable.
ip6tables -L on the host:
curl -6 requests to the nginx container still come through docker's IPv4 NAT: