pertoft opened this issue 4 years ago
However, this solution is very unstable and not suited for production - we have experienced many "hiccups" where IPv6 connectivity is lost.
I have seen this reported a fair amount. In my own experience it happens when the external interface loses its proxy_ndp kernel setting. That would occur whenever a Docker container was started/stopped, which adjusted the network (adding/removing a veth interface). On my VPS instance this triggered a cloud-init "hotplug" hook (udev rule) that renders the netplan config to a systemd-networkd interface config file and applies it again (even though no changes were made), resetting the interface settings.
In this case networkd has a setting that can keep proxy_ndp enabled, but netplan (which generates the networkd config) does not. I needed to add my own hook script (for networkd this is done with the networkd-dispatcher package; other network managers have similar hook-script features), where I just check that the external interface (enp1s0 for me) is the one triggering the hook and re-enable proxy_ndp. That way it stays enabled (there's probably some small downtime while it's off that could cause a brief connectivity issue for requests):
/etc/networkd-dispatcher/configured.d/ipv6-ndp.sh:
#!/bin/bash
# networkd-dispatcher "configured" hook: re-enable NDP proxying on the
# external interface whenever networkd (re)configures it, e.g. after Docker
# adds or removes a veth interface. ${IFACE} is set by networkd-dispatcher.
TARGET_IFACE='enp1s0'

if [[ ${IFACE} == "${TARGET_IFACE}" ]]
then
  sysctl "net.ipv6.conf.${TARGET_IFACE}.proxy_ndp=1"
fi
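Note that networkd-dispatcher only runs hook scripts that are executable (and, if I recall correctly, owned by root), so after creating the file:

chmod +x /etc/networkd-dispatcher/configured.d/ipv6-ndp.sh

You can then check that the setting survives a network event (for example starting or stopping a container) with:

sysctl net.ipv6.conf.enp1s0.proxy_ndp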
I think it is gotchas like the above, where there is a lot of behind-the-scenes configuration going on (through layers of network config) that gets triggered by an event as simple as starting or stopping a container. AFAIK it's not something Docker can easily fix on its end, as there are many different ways a system may configure and manage a network.
A better fix, in my case at least, would be for Netplan to support configuring networkd with proxy_ndp=1, like it supports accept-ra: true for accept_ra=2 (additional gotcha: networkd enables this but disables the kernel setting, handling RAs with its own internal implementation, which can be confusing/unexpected at first glance).
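Until then, if you manage the interface with systemd-networkd directly instead of through netplan, the relevant option lives in the .network file. Roughly something like this (just a sketch, not a full config; the interface name is a placeholder and the commented-out address would be one of your container addresses):

/etc/systemd/network/10-enp1s0.network:

[Match]
Name=enp1s0

[Network]
DHCP=yes
# Tell networkd to keep NDP proxying enabled on this interface
IPv6ProxyNDP=yes
# Optionally proxy specific addresses:
#IPv6ProxyNDPAddress=2001:db8::242:ac11:2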
If you don't need containers to be assigned publicly routable IPv6 GUA addresses, and instead are OK with a single IPv6 GUA on your external interface that you NAT to containers like you do with IPv4, then it is much simpler to just create a docker network with a ULA subnet and assign containers IPv6 ULA addresses. These are private (just like the IPv4 addresses in docker networks typically are, thus requiring NAT), and this avoids the NDP issue entirely.
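For example (a rough sketch; the network name and ULA prefix below are placeholders, generate your own prefix from fd00::/8):

docker network create --ipv6 --subnet fd00:dead:beef::/64 ula-net
docker run --rm --network ula-net alpine ip -6 addr show eth0

Outbound traffic from those addresses then gets NATed, provided the ip6tables support mentioned below is enabled.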
If you have IPv4 in use, publishing ports will bypass any firewall config rules anyway, and containers will conflict when binding the same port on the external interface's public IPv4 address. The main benefit of routable IPv6 for containers then isn't as useful unless you're on an IPv6-only host, but that's less likely as many still need to support IPv4 client connections.
I think IPv6 ULA networks weren't supported that well when this issue was created, but we've had ip6tables support in /etc/docker/daemon.json for a while now, and it has received fixes throughout the 20.10.x series since it was introduced. As the info from the original report above shows, the docker host wasn't new enough at the time to support this, but it should be capable of it today, and that is what I would recommend most adopt for IPv6 support.
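For reference, on 20.10.x that looks roughly like this in /etc/docker/daemon.json (a sketch; the ULA prefix is a placeholder, and depending on the exact version ip6tables may still be gated behind the experimental flag):

{
  "experimental": true,
  "ipv6": true,
  "fixed-cidr-v6": "fd00:dead:beef::/64",
  "ip6tables": true
}

followed by restarting the daemon (systemctl restart docker).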
Expected behavior
When Docker with IPv6 (sharing an IPv6 /64 prefix between hosts and containers) launches a container, I expect it to be accessible from the Internet and from other neighbouring docker hosts. This should be possible when NDP proxying is configured (adding a container's IPv6 address as an IPv6 neighbour proxy entry on the eth0 upstream interface). Otherwise the containers cannot be accessed from outside the docker host.
Actual behavior
Containers on a Docker host with a container IPv6 network (/80 prefix) carved out of the same IPv6 subnet (/64) are not reachable from other Docker hosts or from the Internet.
The solution to be able to route to hosts on the /80 container network is to manually add each container to the NDP proxy table:
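In practice this means one ip -6 neigh proxy entry per container on the upstream interface, something along these lines (the addresses here are placeholders from the documentation range):

ip -6 neigh add proxy 2001:db8:1:0:0:242:ac11:2 dev eth0

Every such entry has to be re-added whenever a container comes up with a new address, which is what the ndppd approach below tries to automate.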
The official docker docs are inconsistent here, as the old Docker v17.09 documentation (https://docs.docker.com/v17.09/engine/userguide/networking/default_network/ipv6/) describes how to get around the NDP proxy problem by using ndppd to auto-configure NDP proxying for containers. However, this solution is very unstable and not suited for production - we have experienced many "hiccups" where IPv6 connectivity is lost. The current docker documentation (https://docs.docker.com/config/daemon/ipv6/) only describes that IPv6 should be enabled in docker - nothing else. After reading a lot on google, I have got the impression that docker-proxy should handle NDP proxy configuration, but it is not working.
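For reference, that ndppd setup amounts to a config along these lines (a rough sketch; the interface name and /80 prefix are placeholders, check the ndppd.conf man page for your version):

/etc/ndppd.conf:

proxy eth0 {
    rule 2001:db8:1:0:1::/80 {
        auto
    }
}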
Steps to reproduce the behavior
Output of docker version:
Output of docker info:
Additional environment details (AWS, VirtualBox, physical, etc.)