robbertkl / docker-ipv6nat

Extend Docker with IPv6 NAT, similar to IPv4
MIT License

NAT does not work for incoming connections. #14

Closed rmoriz closed 7 years ago

rmoriz commented 7 years ago

Scenario

Debian 8 Docker version 17.05.0-ce, build 89658be

docker.service:

ExecStart=/usr/bin/dockerd -H fd:// --storage-driver=overlay2 --experimental --live-restore

Steps

  1. deployed the ipv6nat container:

Privileged, with IPv6 enabled, host networking, and /lib/modules plus the docker socket mounted:

```json [ { "Id": "854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676", "Created": "2017-07-21T10:19:17.394043216Z", "Path": "/docker-ipv6nat", "Args": [ "--retry" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 13753, "ExitCode": 0, "Error": "", "StartedAt": "2017-07-21T10:19:17.718707332Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:24c47013b0c763ab748c7e7fcdc0656ff8a603c8ae6d72183f1e17ae52deb0d8", "ResolvConfPath": "/srv/docker/containers/854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676/resolv.conf", "HostnamePath": "/srv/docker/containers/854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676/hostname", "HostsPath": "/srv/docker/containers/854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676/hosts", "LogPath": "/srv/docker/containers/854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676/854b19ba0f1df3318b72e068a39c640de98c627970ff1354a3ae48462e89a676-json.log", "Name": "/ipv6nat", "RestartCount": 0, "Driver": "overlay2", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": [ "/var/run/docker.sock:/var/run/docker.sock:ro", "/lib/modules:/lib/modules:ro" ], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "host", "PortBindings": {}, "RestartPolicy": { "Name": "always", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Dns": [], "DnsOptions": null, "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": true, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": [ "label=disable" ], "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 0, "NanoCpus": 0, 
"CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": null, "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 0, "MemorySwappiness": 0, "OomKillDisable": false, "PidsLimit": 0, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0 }, "GraphDriver": { "Data": { "LowerDir": "/srv/docker/overlay2/fa7d03291f8dbc77c3a550ebf6a2202629e0a9b21fb0a550da38263c2e16f783-init/diff:/srv/docker/overlay2/f127866263e2029eaac0e9b355091084bd462be474b434ffbe681c153f7314e5/diff:/srv/docker/overlay2/eb38d2362b9668c267a56e9b66ed9926acd10196fa20c24892c1f9a9e730310a/diff:/srv/docker/overlay2/2a29b881a2dba9223e04f1293abe3013e4eb5ad6186471c5107ae864b9232191/diff:/srv/docker/overlay2/5aa2c96976c412b28ba46dbd24556899ffe9383c394f4940d2049df812560deb/diff", "MergedDir": "/srv/docker/overlay2/fa7d03291f8dbc77c3a550ebf6a2202629e0a9b21fb0a550da38263c2e16f783/merged", "UpperDir": "/srv/docker/overlay2/fa7d03291f8dbc77c3a550ebf6a2202629e0a9b21fb0a550da38263c2e16f783/diff", "WorkDir": "/srv/docker/overlay2/fa7d03291f8dbc77c3a550ebf6a2202629e0a9b21fb0a550da38263c2e16f783/work" }, "Name": "overlay2" }, "Mounts": [ { "Type": "bind", "Source": "/var/run/docker.sock", "Destination": "/var/run/docker.sock", "Mode": "ro", "RW": false, "Propagation": "" }, { "Type": "bind", "Source": "/lib/modules", "Destination": "/lib/modules", "Mode": "ro", "RW": false, "Propagation": "" } ], "Config": { "Hostname": "chef01", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "DOCKER_IPV6NAT_VERSION=v0.2.4" ], 
"Cmd": [ "--retry" ], "ArgsEscaped": true, "Image": "robbertkl/ipv6nat:latest", "Volumes": null, "WorkingDir": "", "Entrypoint": [ "/docker-ipv6nat" ], "OnBuild": null, "Labels": {} }, "NetworkSettings": { "Bridge": "", "SandboxID": "a7374238868989a41a72086b26aa3ef978fd7da1b25290707b421dbe9552846a", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "/var/run/docker/netns/default", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "host": { "IPAMConfig": {}, "Links": null, "Aliases": [], "NetworkID": "730ae4f6e4ec43bc1e6f39965deb7eabead6e6772b51c2ff625898b61b634cc4", "EndpointID": "b3e421e9b395e48f5ec311a4b9ff20c9609d9481c4397f04dfc90d3267152222", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "" } } } } ] ```
  2. created an internal net with IPv6 and a ULA range

(container appears after step 3)

```json
[
    {
        "Name": "corp-net",
        "Id": "b026b9fadf56848e67421503bdad88056acba1327ab4990a2129be52a69cdd75",
        "Created": "2017-07-21T10:19:18.284990364Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": true,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                { "Subnet": "172.18.0.0/16", "Gateway": "172.18.0.1" },
                { "Subnet": "fd00:dead:beef::/48", "Gateway": "fd00:dead:beef::1" }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {
            ...
            "933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725": {
                "Name": "corp-chef-nginx",
                "EndpointID": "41d772b5a903de156d334efa70b1d73918e832e56bcdd7961e0c83f8be71c756",
                "MacAddress": "02:42:ac:12:00:07",
                "IPv4Address": "172.18.0.7/16",
                "IPv6Address": "fd00:dead:beef::7/48"
            },
            ...
        },
        "Options": {},
        "Labels": {}
    }
]
```
  3. launched the container
```json [ { "Id": "933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725", "Created": "2017-07-21T11:14:45.774908667Z", "Path": "nginx", "Args": [ "-g", "daemon off;" ], "State": { "Status": "running", "Running": true, "Paused": false, "Restarting": false, "OOMKilled": false, "Dead": false, "Pid": 19472, "ExitCode": 0, "Error": "", "StartedAt": "2017-07-21T11:14:46.831241413Z", "FinishedAt": "0001-01-01T00:00:00Z" }, "Image": "sha256:c9deecae67990851544e03d1403649d123922b4a13c6380b08d6e189b18994d8", "ResolvConfPath": "/srv/docker/containers/933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725/resolv.conf", "HostnamePath": "/srv/docker/containers/933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725/hostname", "HostsPath": "/srv/docker/containers/933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725/hosts", "LogPath": "/srv/docker/containers/933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725/933ea8c487ca213c8d9a5f7a4a7e0904482f5688e178a03c91177415dd6a8725-json.log", "Name": "/corp-chef-nginx", "RestartCount": 0, "Driver": "overlay2", "MountLabel": "", "ProcessLabel": "", "AppArmorProfile": "", "ExecIDs": null, "HostConfig": { "Binds": [ ... 
], "ContainerIDFile": "", "LogConfig": { "Type": "json-file", "Config": {} }, "NetworkMode": "chef-server", "PortBindings": { "8080/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "80" } ], "8443/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "443" } ] }, "RestartPolicy": { "Name": "unless-stopped", "MaximumRetryCount": 0 }, "AutoRemove": false, "VolumeDriver": "", "VolumesFrom": null, "CapAdd": null, "CapDrop": null, "Dns": [], "DnsOptions": null, "DnsSearch": [], "ExtraHosts": null, "GroupAdd": null, "IpcMode": "", "Cgroup": "", "Links": null, "OomScoreAdj": 0, "PidMode": "", "Privileged": false, "PublishAllPorts": false, "ReadonlyRootfs": false, "SecurityOpt": null, "UTSMode": "", "UsernsMode": "", "ShmSize": 67108864, "Runtime": "runc", "ConsoleSize": [ 0, 0 ], "Isolation": "", "CpuShares": 0, "Memory": 134217728, "NanoCpus": 0, "CgroupParent": "", "BlkioWeight": 0, "BlkioWeightDevice": null, "BlkioDeviceReadBps": null, "BlkioDeviceWriteBps": null, "BlkioDeviceReadIOps": null, "BlkioDeviceWriteIOps": null, "CpuPeriod": 0, "CpuQuota": 0, "CpuRealtimePeriod": 0, "CpuRealtimeRuntime": 0, "CpusetCpus": "", "CpusetMems": "", "Devices": [], "DeviceCgroupRules": null, "DiskQuota": 0, "KernelMemory": 0, "MemoryReservation": 0, "MemorySwap": 268435456, "MemorySwappiness": 0, "OomKillDisable": false, "PidsLimit": 0, "Ulimits": null, "CpuCount": 0, "CpuPercent": 0, "IOMaximumIOps": 0, "IOMaximumBandwidth": 0 }, "GraphDriver": { "Data": { "LowerDir": 
"/srv/docker/overlay2/82c4a39e99a8e8667dd3a8bd9baf2126c5d9d84ae982dfe18b645f18daf5bee8-init/diff:/srv/docker/overlay2/e0080e5dfea5a3a8cdd18ac1123a690375d246f7a4e0a51b259cc1b076bedb7f/diff:/srv/docker/overlay2/5a470b81dfce3d10be43543f8dc2cbf25e878e1e2054cf7da8ca43c49e9359c0/diff:/srv/docker/overlay2/9676600d6022a3fdff09d47865bcc67e2ea6e867c4aac4624230dfd5ca995c29/diff:/srv/docker/overlay2/6dbeef38558bab5665a737469664ad3b6c3ca664a312de228ba7128b8e72cc9c/diff:/srv/docker/overlay2/e04103d14cf427b7e7cf247ca8a6527bb61d3786bfece1d5f83287c9a7060f70/diff:/srv/docker/overlay2/926612703de4a445fb7d5e10d58fecbafb685cb65a6a19cbd9b6d6dbaf23375a/diff:/srv/docker/overlay2/f51050f91076ea622a25da6eb9e5b68d243d5114851368812e92d6c4da633983/diff:/srv/docker/overlay2/f7ce377ed0931dbf790acc2fd547adc913c298504d41f13735b1bf139fa7fdf8/diff:/srv/docker/overlay2/fca69021fe3bf2cb1e1f8188ebe8515a3a73cf524384eb0281271724287ef41e/diff:/srv/docker/overlay2/dc280e9215f01253ccd7aa4f4082b1d6a87b6ca0acc0679ba4332a151a9fbd07/diff:/srv/docker/overlay2/37e6827a37c0909bffbc2c684e4b2e60601d851ec82e174b030bbdc13bf25be3/diff:/srv/docker/overlay2/257902a0f76eca3bf9a80141825d1947fb2223ba88d637c76c7be797d3b53a6b/diff:/srv/docker/overlay2/da6cd3ba41a2b0ae93622daa930ed3714dd656ed3fb71dc30eea34e427541fab/diff:/srv/docker/overlay2/5fa8b42cb1d3f60cf044b78bd0ac3ee22bb93b94b86ccc89c697e336e66760dd/diff", "MergedDir": "/srv/docker/overlay2/82c4a39e99a8e8667dd3a8bd9baf2126c5d9d84ae982dfe18b645f18daf5bee8/merged", "UpperDir": "/srv/docker/overlay2/82c4a39e99a8e8667dd3a8bd9baf2126c5d9d84ae982dfe18b645f18daf5bee8/diff", "WorkDir": "/srv/docker/overlay2/82c4a39e99a8e8667dd3a8bd9baf2126c5d9d84ae982dfe18b645f18daf5bee8/work" }, "Name": "overlay2" }, "Mounts": [ ... 
], "Config": { "Hostname": "chef01", "Domainname": "", "User": "", "AttachStdin": false, "AttachStdout": false, "AttachStderr": false, "ExposedPorts": { "80/tcp": {}, "8080/tcp": {}, "8443/tcp": {} }, "Tty": false, "OpenStdin": false, "StdinOnce": false, "Env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "NGINX_VERSION=1.12.1" ], "Cmd": [ "nginx", "-g", "daemon off;" ], "ArgsEscaped": true, "Image": "corp-chef-nginx:latest", "Volumes": null, "WorkingDir": "", "Entrypoint": null, "OnBuild": null, "Labels": {}, "StopSignal": "SIGTERM" }, "NetworkSettings": { "Bridge": "", "SandboxID": "f95afbc662eaa24a0fabe4ceb7c28ea8401604916c6449b9f1fd088a09aae459", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "80/tcp": null, "8080/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "80" } ], "8443/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "443" } ] }, "SandboxKey": "/var/run/docker/netns/f95afbc662ea", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "corp-net": { "IPAMConfig": {}, "Links": null, "Aliases": [ "933ea8c487ca" ], "NetworkID": "b026b9fadf56848e67421503bdad88056acba1327ab4990a2129be52a69cdd75", "EndpointID": "41d772b5a903de156d334efa70b1d73918e832e56bcdd7961e0c83f8be71c756", "Gateway": "172.18.0.1", "IPAddress": "172.18.0.7", "IPPrefixLen": 16, "IPv6Gateway": "fd00:dead:beef::1", "GlobalIPv6Address": "fd00:dead:beef::7", "GlobalIPv6PrefixLen": 48, "MacAddress": "02:42:ac:12:00:07" } } } } ] ```

As you can see, the container is in the IPv6-enabled network. However, the ports are not reachable over IPv6.

ip6tables -L on the host:

```
ip6tables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-ISOLATION  all      anywhere             anywhere
DOCKER     all      anywhere             anywhere
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere
ACCEPT     all      anywhere             anywhere
DOCKER     all      anywhere             anywhere
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere
ACCEPT     all      anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (2 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination
DROP       all      anywhere             anywhere
DROP       all      anywhere             anywhere
RETURN     all      anywhere             anywhere
```

curl -6 requests to the nginx container still arrive through Docker's IPv4 NAT (note the IPv4 gateway as source address):

172.18.0.1 - - [21/Jul/2017:11:56:02 +0000] "GET / HTTP/1.1" 200 2490 "-" "curl/7.51.0" "-"
robbertkl commented 7 years ago

Thank you for your detailed report.

First of all, I see some duplicated rules here. Did you perhaps docker kill the ipv6nat container and then start it again? Can you flush all rules and restart ipv6nat?

Here's my ip6tables -L to compare:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DOCKER-ISOLATION  all      anywhere             anywhere            
DOCKER     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp      anywhere             fd00:dead:beef::XXXX  tcp dpt:XXXX
ACCEPT     tcp      anywhere             fd00:dead:beef::XXXX  tcp dpt:XXXX
ACCEPT     tcp      anywhere             fd00:dead:beef::XXXX  tcp dpt:XXXX
ACCEPT     tcp      anywhere             fd00:dead:beef::XXXX  tcp dpt:XXXX
[ ... ]

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination         
RETURN     all      anywhere             anywhere            

Aside from the duplicated rules, the main difference seems to be that your DOCKER chain is empty: the per-port ACCEPT rules for the exposed ports are missing.

Let me look into / think about why you could be missing those rules. I'm running 17.03.1-ce, maybe something has changed. Did you have the same issue with other hosts or versions, or is this your first time running ipv6nat?

robbertkl commented 7 years ago

Also, after a flush + restart, can you send me the output of ip6tables-save instead? Thanks.

rmoriz commented 7 years ago

Duplicate rules reappear on container restart:


root@host01:~# ip6tables -F
root@host01:~# ip6tables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (0 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION (0 references)
target     prot opt source               destination         
root@host01:~# docker restart ipv6nat
ipv6nat
root@host01:~# ip6tables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DOCKER-ISOLATION  all      anywhere             anywhere            
DOCKER     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere            
DOCKER     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (2 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination         
DROP       all      anywhere             anywhere            
DROP       all      anywhere             anywhere            
RETURN     all      anywhere             anywhere    

(first time user, no experience with other docker versions)

robbertkl commented 7 years ago

I just realised that's normal, they're for different interfaces.

Send me the ip6tables-save output and we can verify this.

rmoriz commented 7 years ago

I've reset everything, removed all containers and networks, rebooted, and recreated everything.

root@host01:~# ip6tables-save
# Generated by ip6tables-save v1.4.21 on Fri Jul 21 12:44:09 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [6:519]
:POSTROUTING ACCEPT [6:519]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d ::1/128 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s fd00:dead:beef::/48 ! -o br-692577c71c23 -j MASQUERADE
-A DOCKER -i br-692577c71c23 -j RETURN
COMMIT
# Completed on Fri Jul 21 12:44:09 2017
# Generated by ip6tables-save v1.4.21 on Fri Jul 21 12:44:09 2017
*filter
:INPUT ACCEPT [43:16284]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [45:6368]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o br-692577c71c23 -j DOCKER
-A FORWARD -o br-692577c71c23 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i br-692577c71c23 ! -o br-692577c71c23 -j ACCEPT
-A FORWARD -i br-692577c71c23 -o br-692577c71c23 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
COMMIT
# Completed on Fri Jul 21 12:44:09 2017

and:

root@host01:~# ip6tables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DOCKER-ISOLATION  all      anywhere             anywhere            
DOCKER     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all      anywhere             anywhere            
ACCEPT     all      anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination         
RETURN     all      anywhere             anywhere    
robbertkl commented 7 years ago

In your container inspect I see various /srv/docker paths. Are you sure your docker socket is at /var/run/docker.sock? Can you send ls -l /var/run/docker.sock?

rmoriz commented 7 years ago
root@host01:~# ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Jul 21 12:43 /var/run/docker.sock

root@host01:~# DOCKER_HOST=unix:///var/run/docker.sock docker info | head 
Containers: 9
 Running: 9
 Paused: 0
 Stopped: 0
Images: 62
Server Version: 17.05.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true

I've just symlinked /var/lib/docker to /srv/docker for volume size reasons.

robbertkl commented 7 years ago

Could you give me the docker commands (the docker network create for the network(s), and the docker run for both the ipv6nat container and the other container), so I can try to reproduce it (on Debian + 17.05.0-ce) and debug why it's not seeing the containers / exposed ports? Thanks.

rmoriz commented 7 years ago

I'm using Chef with this cookbook (it connects to dockerd via the docker-api rubygem), so I cannot provide the exact commands right now, sorry.

By the way, I tried running the docker-ipv6nat binary outside of Docker, but it doesn't change anything in ip6tables… there's also no log/stdout/stderr output at all.

robbertkl commented 7 years ago

OK, thanks. Probably something simple; I'm not sure it's related to the Docker version. It apparently sets up ip6tables correctly, but then fails to detect the containers / exposed ports, so it creates no rules for those ports.

I'll look into it and let you know. Would be nice to get to the bottom of this. Thanks for all the detailed info so far.

robbertkl commented 7 years ago

Unfortunately, I was not able to reproduce the problem by just creating the containers. Here's what I did. After creating the network, ip6tables-save showed:

*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d ::1/128 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s fd00:dead:beef::/48 ! -o mynetwork -j MASQUERADE
-A DOCKER -i mynetwork -j RETURN
COMMIT

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o mynetwork -j DOCKER
-A FORWARD -o mynetwork -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i mynetwork ! -o mynetwork -j ACCEPT
-A FORWARD -i mynetwork -o mynetwork -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
COMMIT

(so basically the same as yours). And after starting the container with the exposed port:

```
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d ::1/128 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s fd00:dead:beef::/48 ! -o mynetwork -j MASQUERADE
-A POSTROUTING -s fd00:dead:beef::2/128 -d fd00:dead:beef::2/128 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i mynetwork -j RETURN
-A DOCKER ! -i mynetwork -p tcp -m tcp --dport 80 -j DNAT --to-destination [fd00:dead:beef::2]:80
COMMIT

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o mynetwork -j DOCKER
-A FORWARD -o mynetwork -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i mynetwork ! -o mynetwork -j ACCEPT
-A FORWARD -i mynetwork -o mynetwork -j ACCEPT
-A DOCKER -d fd00:dead:beef::2/128 ! -i mynetwork -o mynetwork -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
COMMIT
```

So here the 3 rules for the exposed port were created by ipv6nat: 2 in the nat table and 1 in the filter table. All is good.

So it seems the problem is not with Debian 8 or Docker 17.05.0-ce. Could you try the above docker commands on your machine (manually, without chef) to see if it works? Then we know it's somewhere in the way chef creates everything.

robbertkl commented 7 years ago

Never mind @rmoriz, I found it! Your container is started with -p 0.0.0.0:80:80 instead of -p 80:80. This is a "feature" of docker-ipv6nat: when it sees a binding to a specific IPv4 address (or, in this case, to any IPv4 address), it refrains from binding to IPv6.

Are you able to change this behaviour for your setup?

I have considered changing ipv6nat so that 0.0.0.0 becomes a special case and binds to any IPv6 address as well (as if it were left out), especially since for plain Docker -p 80:80 actually means the same as -p 0.0.0.0:80:80.
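The decision rule described above can be sketched as follows. This is a hypothetical Python illustration of the behaviour discussed in this thread, not the actual Go source of docker-ipv6nat; the function name and the `special_case_any` flag are made up for the example:

```python
# Sketch of docker-ipv6nat's binding check, as described in this thread.
# Pre-v0.3.0: any explicit IPv4 HostIp -- including 0.0.0.0 -- suppressed
# the IPv6 NAT rules. From v0.3.0, 0.0.0.0 is treated as "any address".

def creates_ipv6_rules(host_ip: str, special_case_any: bool) -> bool:
    """Return True if ipv6nat would add IPv6 DNAT/ACCEPT rules for a binding."""
    if host_ip == "":          # no host IP given: also bind IPv6
        return True
    if host_ip == "0.0.0.0":   # "any IPv4": special-cased from v0.3.0 on
        return special_case_any
    return False               # a specific IPv4 address: IPv4 only

# The chef cookbook published on 0.0.0.0, so before the change (v0.2.x):
assert creates_ipv6_rules("0.0.0.0", special_case_any=False) is False
# and with the v0.3.0 special case:
assert creates_ipv6_rules("0.0.0.0", special_case_any=True) is True
```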

robbertkl commented 7 years ago

I went ahead and changed this right away; it should make things easier for you and for anyone else running into this issue.

Just upgrade to v0.3.0 and you should be good to go!

(Closing this issue now, feel free to reopen or open a new one if you're still having issues)

rmoriz commented 7 years ago

Thanks a lot!

I didn't specify 0.0.0.0 myself but sadly https://github.com/chef-cookbooks/docker/blob/master/libraries/helpers_container.rb#L160 adds 0.0.0.0 :/

robbertkl commented 7 years ago

Yeah, that's what I figured. That's why I changed it in docker-ipv6nat, so you wouldn't have to change the cookbook.

rmoriz commented 7 years ago

I was able to abuse the cookbook by using something like:

```ruby
port [
  ':80:8080',
  ':443:8443',
]
```

which even works with the previous version of ipv6nat. Interestingly enough, docker ps still shows 80/tcp, 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp, but docker inspect reveals the difference:


```json
[
    {
        "HostConfig": {
            "PortBindings": {
                "8080/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "80"
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "443"
                    }
                ]
            }
        },
        "NetworkSettings": {
            "Ports": {
                "80/tcp": null,
                "8080/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "80"
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "443"
                    }
                ]
            }
        }
    }
]
```

odd :/
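The colon trick above works because of how `ip:hostPort:containerPort` specs split: a leading colon leaves the host-IP field empty, so the pre-v0.3.0 check never sees an explicit IPv4 address. A rough sketch (the parser below is a simplified illustration of Docker's port-spec syntax, not the cookbook's or Docker's actual code):

```python
# Simplified split of a Docker-style "ip:hostPort:containerPort" port spec.
# With a leading colon (":80:8080") HostIp comes out empty, which is why
# pre-v0.3.0 docker-ipv6nat still created IPv6 rules for such bindings.

def split_port_spec(spec: str) -> dict:
    parts = spec.split(":")
    if len(parts) == 3:
        host_ip, host_port, container_port = parts
    elif len(parts) == 2:
        host_ip = ""
        host_port, container_port = parts
    else:
        raise ValueError(f"unsupported spec: {spec!r}")
    return {"HostIp": host_ip, "HostPort": host_port, "ContainerPort": container_port}

print(split_port_spec(":80:8080"))
# {'HostIp': '', 'HostPort': '80', 'ContainerPort': '8080'}
```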

Thanks a lot again, it works now like a charm :)