flecno opened this issue 9 years ago
This will definitely happen at some point. I have native IPv6 at my place (one of the few things Comcast does right...) and will definitely experiment even more once DigitalOcean adds IPv6 in their San Francisco data center.
As for the pull request you linked to, it looks like a mix of support for the dockerd daemon itself and for the containers. I don't really care about the dockerd daemon using IPv6; we just need to enable it for the containers. OpenVPN looks ready, and ip6tables is certainly ready.
An outstanding question with IPv6 in my mind is what to do about IP addressing and routing. NAT? IPv6 prefix delegation and real routing?
For now you can try to use host networking instead of bridged networking. Does that fit into your architecture?
When you run the container in privileged mode, you can set the IPv6 address from the inside.
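For example, a rough sketch of that privileged-mode approach (the 2001:db8 addresses are documentation placeholders and the container name is arbitrary):

```
# run privileged so the container may manage its own interfaces
docker run -d --privileged --name ovpn -v $OVPN_DATA:/etc/openvpn -p 1194:1194/udp kylemanna/openvpn ovpn_run
# then set the IPv6 address and route from the inside
docker exec ovpn ip -6 addr add 2001:db8::2/64 dev eth0
docker exec ovpn ip -6 route add default via 2001:db8::1 dev eth0
```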
Related to https://github.com/docker/docker/pull/8947
Now that Docker has native IPv6 support as of 1.5, this should be a bit easier.
Played with this very briefly tonight but didn't get proper IPv6 networking in the container.
The host works just fine on IPv6. Digital Ocean provides a /124 subnet (wth, how weak is that?!?). I was able to assign additional IPv6 addresses to the host and reach them. I restarted the Docker daemon with IPv6 enabled and a fixed /125 subnet block for container assignment. The openvpn container would come up with an IPv6 address, but simple IPv6 pings never got a response, even though I saw them leaving the host machine's interface with tcpdump. Not sure why there was no reply.
Docker recommends a /80 subnet size, and I think the IPv6 recommendation is to never hand out subnets smaller than /64, so I don't know what the hell Digital Ocean is doing with a /124 subnet.
Details @ http://docs.docker.com/v1.5/articles/networking/#ipv6
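The relevant daemon invocation from those docs looks roughly like this (the /80 prefix is an illustrative placeholder; substitute a block that is actually routed to the host):

```
# Docker 1.5: enable IPv6 and hand containers addresses from a fixed block
docker -d --ipv6 --fixed-cidr-v6="2001:db8:1::/80"
```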
I'll play around with it sometime in the future, not sure when though.
You'll want to have a look at the current master branch docs: http://docs.master.dockerproject.com/articles/networking/#using-ndp-proxying
I have written a how-to for using IPv6 on Digital Ocean here.
OpenVPN won't work with subnets smaller than /112, so this is a pointless struggle.
HE.net (TunnelBroker) gives you a routed /48 for free, so you can assign a /64 on all of your interfaces and then some. In Amsterdam their server is 1ms away from DigitalOcean so virtually no overhead.
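For anyone trying this, the TunnelBroker side is a plain 6in4 tunnel; here is a sketch with placeholder values (HE's tunnel details page lists the exact addresses for your tunnel):

```
# 6in4 tunnel to HE; fill in the IPv4 endpoints and IPv6 addresses from HE
ip tunnel add he-ipv6 mode sit remote <he-server-ipv4> local <your-ipv4> ttl 255
ip link set he-ipv6 up
ip -6 addr add 2001:db8:aa::2/64 dev he-ipv6   # tunnel client address
ip -6 route add ::/0 dev he-ipv6               # default route over the tunnel
ip -6 addr add 2001:db8:bb:1::1/64 dev eth0    # a /64 carved from the routed /48
```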
The challenge is "routing" a subnet inside a container for OpenVPN. You also need to set net.ipv6.conf.all.forwarding to 1, which requires remounting /proc/sys or running the container in privileged mode. Being lazy, I just used host networking, but there are probably better solutions.
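To illustrate the lazy option (a sketch, with $OVPN_DATA as used elsewhere in this thread):

```
# host networking: set forwarding on the host and share its network stack
sysctl -w net.ipv6.conf.all.forwarding=1
docker run -d --net=host -v $OVPN_DATA:/etc/openvpn kylemanna/openvpn ovpn_run
# alternatively, privileged mode lets the container set the sysctl itself
docker run -d --privileged -v $OVPN_DATA:/etc/openvpn -p 1194:1194/udp kylemanna/openvpn ovpn_run
```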
Hey all, had a chance to hack on this a little bit this weekend on the dev branch. The result is a proper /64 subnet dedicated to and managed by the OpenVPN tun interface via TunnelBroker (thanks @Nightling).
Basic docs @ https://github.com/kylemanna/docker-openvpn/blob/dev/docs/ipv6.md
If anyone is interested in testing it out and giving me feedback, I'd appreciate it. The biggest hack/problem is adding the static route after the Docker container is initialized. It uses a systemd (i.e. not Ubuntu yet?) ExecStartPost to accomplish this; not sure how to do it with upstart, and I don't care to find out myself since systemd is the future! At least with this approach we can avoid the IPv6 NDP proxying that drove me nuts, by using proper subnet forwarding.
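The gist of the ExecStartPost hack, roughly (IP6_PREFIX and NAME as in the dev-branch unit file):

```
[Service]
# once the container is up, route the delegated prefix at its global address
ExecStartPost=/bin/sh -c 'sleep 1; ip route replace ${IP6_PREFIX} via $(docker inspect -f "{{ .NetworkSettings.GlobalIPv6Address }}" ${NAME}) dev docker0'
```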
Quick todo list off the top of my head:

- ovpn_getclient
- dev
- docker image tags in the docs and systemd.service file

Hi, thanks for all this. Any clue how to achieve the same with native IPv6, without using TunnelBroker? I tried to follow the docs and to understand what is done behind the step-by-step guide, but I don't get it.
I haven't looked at this in a while, but most of the frustration was wrestling with docker to allocate a /64 or better to each docker container and then telling OpenVPN to use it.
In the meantime, I tried to reproduce exactly what is described in the docs: https://github.com/kylemanna/docker-openvpn/blob/dev/docs/ipv6.md
I am not sure I understand what is being achieved in step 4 with the instance. I tried to run the code as-is (I previously ran all the code the same way and confirm it worked properly) and got back an error that I don't understand:
```
# systemctl status docker-openvpn@test0
● docker-openvpn@test0.service - OpenVPN Docker Container
   Loaded: loaded (/etc/systemd/system/docker-openvpn@.service; disabled)
   Active: activating (auto-restart) (Result: exit-code) since Sat 2016-01-02 16:46:59 CET; 6s ago
     Docs: https://github.com/kylemanna/docker-openvpn
  Process: 1377 ExecStopPost=/bin/sh -c test -z "$IP6_PREFIX" && exit 0; ip route del $IP6_PREFIX dev docker0 (code=exited, status=2)
  Process: 1364 ExecStartPost=/bin/sh -c test -z "${IP6_PREFIX}" && exit 0; sleep 1; ip route replace ${IP6_PREFIX} via $(docker inspect -f "{{ .NetworkSettings.GlobalIPv6Address }}" $NAME ) dev docker0 (code=exited, status=1/FAILURE)
  Process: 1363 ExecStart=/usr/bin/docker run --rm --privileged --volumes-from ${DATA_VOL}:ro --name ${NAME} -p ${PORT} ${IMG} ovpn_run $ARGS (code=exited, status=1/FAILURE)
  Process: 1359 ExecStartPre=/bin/sh -c test -z "$IP6_PREFIX" && exit 0; sysctl net.ipv6.conf.all.forwarding=1 (code=exited, status=0/SUCCESS)
  Process: 1352 ExecStartPre=/usr/bin/docker pull $IMG (code=exited, status=0/SUCCESS)
  Process: 1346 ExecStartPre=/usr/bin/docker rm -f $NAME (code=exited, status=1/FAILURE)
 Main PID: 1363 (code=exited, status=1/FAILURE)

Jan 02 16:46:59 otari systemd[1]: Unit docker-openvpn@test0.service entered failed state.
```
Do you have any idea what went wrong here? Thanks!
If it can be useful, here is my feedback on setting up IPv6 on my dedicated server. My provider assigns a /64 to my host but actually routes and advertises only a /128. Therefore, I have to resort to NDP proxying:

```
sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv6.conf.all.proxy_ndp=1
ip6tables -I FORWARD 8 -i docker0 -o eth0 -j ACCEPT # should be done with firewalld
ip6tables -I FORWARD 9 -i eth0 -o docker0 -j ACCEPT # should be done with firewalld
```
I give my docker containers a /65 subnet, so I run docker with `--ipv6 --fixed-cidr-v6="a:b:c:d:8000::/65"` and proxy through NDP the subrange that will actually be used:

```
for a in {0..9} a b c d e f ; do
  for b in {0..9} a b c d e f ; do
    ip -6 neigh add proxy a:b:c:d:8000:cafe:babe:${a}${b} dev eth0
  done
done
```
I generate the config with Google's DNS and the v6 gateway I will push:
```
# the -v volume mount must precede the image name so docker run receives it
docker run --rm -it \
    -v $OVPN_DATA:/etc/openvpn \
    kylemanna/openvpn ovpn_genconfig \
    -u udp://domain.tld \
    -n "2001:4860:4860::8888 2001:4860:4860::8844 8.8.8.8 8.8.4.4" \
    -p "route-ipv6 2000::/3 a:b:c:d:8000:B16B:00B5:1"
```
I run the container, giving the subnet I want the OpenVPN clients to use:

```
docker run --name ovpn \
    --mac-address=CA:FE:BA:BE:00:10 \
    --cap-add=NET_ADMIN \
    --sysctl net.ipv6.conf.default.forwarding=1 \
    --sysctl net.ipv6.conf.all.forwarding=1 \
    -p 1194:1194/udp \
    -v $OVPN_DATA:/etc/openvpn \
    -d kylemanna/openvpn \
    ovpn_run --server-ipv6 a:b:c:d:8000:B16B:00B5::/112
```
I do not run it in privileged mode; I use `--sysctl` together with `NET_ADMIN` instead for forwarding. This approach deprecates the sysctl block in `bin/ovpn_run`.
The /112 subnet is the smallest accepted by OpenVPN. I tried a /120 but it was refused.
This will typically give me a gateway with IPv6 `a:b:c:d:8000:B16B:00B5:1` and a first client with IPv6 `a:b:c:d:8000:B16B:00B5:1000`.
Thus, I now have to proxy the OpenVPN subnet with NDP:
```
# cover the :00xx (server side) and :10xx (first clients) ranges on eth0
for a in {0..9} a b c d e f ; do
  for b in {0..9} a b c d e f ; do
    ip -6 neigh add proxy a:b:c:d:8000:B16B:00B5:00${a}${b} dev eth0
    ip -6 neigh add proxy a:b:c:d:8000:B16B:00B5:10${a}${b} dev eth0
  done
done
```
I also have to set a route on the host for the subnet via the container's IP:

```
ip -6 route add a:b:c:d:8000:B16B:00B5::/112 via $(docker inspect -f "{{ .NetworkSettings.GlobalIPv6Address }}" ovpn)
```
Those host commands can also be put inside the systemd service. Obviously, this NDP proxy story is not great; I will try to switch to a host that provides a native /64 whenever I find a good one (those are very rare!).
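For instance, something along these lines in the unit (a sketch reusing this comment's placeholder prefix and container name):

```
[Service]
ExecStartPost=/bin/sh -c 'sysctl -w net.ipv6.conf.all.proxy_ndp=1'
ExecStartPost=/bin/sh -c 'ip -6 route replace a:b:c:d:8000:B16B:00B5::/112 via $(docker inspect -f "{{ .NetworkSettings.GlobalIPv6Address }}" ovpn)'
```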
I run the client and check that I have IPv6 connectivity:
```
% ip -6 a
8: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 state UNKNOWN qlen 100
    inet6 a:b:c:d:8000:B16B:00B5:1000/112 scope global
       valid_lft forever preferred_lft forever
% ip -6 r
a:b:c:d:8000:B16B:00B5:0/112 dev tun0 proto kernel metric 256 pref medium
2000::/3 dev tun0 metric 1024 pref medium
2000::/3 dev tun0 proto static metric 1026 pref medium
% ping6 ipv6.google.com
PING ipv6.google.com(par21s04-in-x0e.1e100.net (2a00:1450:4007:811::200e)) 56 data bytes
64 bytes from par21s04-in-x0e.1e100.net (2a00:1450:4007:811::200e): icmp_seq=1 ttl=54 time=16.4 ms
```
I enjoy this IPv6 awesomeness wherever I go, especially behind dirty IPv4-only connections. Cheers.
@ykzk Thanks for the awesome write-up. It took a lot of time to put all that together, and the lack of proper support by ISPs and the Docker Engine is not helping.
I'm trying to figure out a way to improve the IPv6 support for this Docker image, but I fear there won't be many people who take the plunge once they see how much work it is (except the truly committed!). Perhaps the best move is to keep things as they are and let the truly interested stumble across this issue.
Any thoughts?
The cleanest implementation I've seen is routing through HE and giving the assigned subnet to the image to make it easy and follow all the expected routing rules.
@kylemanna Well, to be honest, the container is already functional as it is. Except for the --sysctl parameters, all the work I did was setting up my host and my Docker installation for my IPv6 network. Using TunnelBroker is nice for people who don't have any IPv6 connectivity, but in my humble opinion, it is kind of pointless until ISPs really start to provide native IPv6. I mean, cobbling stuff together with tunnelling is nice, but it is not really getting us anywhere; we're merely playing around, not doing real networking. I wouldn't have set up IPv6 if my only options were fake ones like NAT64, 6to4 or TunnelBroker. I don't think the Docker engine can do much more either (NATting IPv6 would be a heresy). You already went above and beyond the call of duty by offering the TunnelBroker alternative; I say we wait until we manage to pressure the providers enough that we get real v6 networking. Thank you for your work.
I set this up on a RHEL 7 server with IPv6, and it mostly works after a few workarounds. Here are my findings:

- The systemd script needs to be changed from docker.socket to docker.service.
- I allocated one /64 block to Docker in /etc/docker/daemon.json and a different one to the VPN server (these are IP6_PREFIX and the --server-ipv6 argument in the systemd file).
- Docker containers aren't allowed to forward IPv6 to the main interface by default with firewalld. After some googling, I found that this works: `firewall-cmd --direct --add-rule ipv6 filter FORWARD_direct 0 -i docker0 -o em1 -j ACCEPT`. This is a common problem with firewalld and any virtual interface: even if you set up the VPN server directly without Docker, you'll have a tun0 interface and will need a similar rule. I'm surprised that this is the default behaviour.
- I think the sleep value in the systemd file needs to be a bit higher; after the container is up, it takes some time before you can query the IP inside the container.
- I still need to run the container privileged, even though the systemd file has NET_ADMIN.
- On my client, on the tun interface, I see an address with /64. Shouldn't this have been /128?
- This will not work if I want to set up the VPN client on my router so that it can in turn assign IPv6 addresses to its clients. Is there any easy way to extend this? It likely involves more config, starting with passing a /56 block to the OpenVPN server and a per-client ccd config so that it can hand an actual /64 block to each client (see the sketch below). I don't know IPv6 too well, though.
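An untested sketch of that last idea: give the server a larger block and delegate a /64 to a router-type client through a ccd file (the router1 client name and 2001:db8 prefixes here are placeholders):

```
# mark client "router1" as the owner of a /64 inside the server's /56
docker run --rm -v $OVPN_DATA:/etc/openvpn kylemanna/openvpn \
    sh -c 'mkdir -p /etc/openvpn/ccd && echo "iroute-ipv6 2001:db8:1:100::/64" > /etc/openvpn/ccd/router1'
# the server config then also needs, roughly:
#   server-ipv6 2001:db8:1::/112       # point-to-point pool for clients
#   route-ipv6 2001:db8:1:100::/64     # route the delegated block into the tun
#   client-config-dir /etc/openvpn/ccd
```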
Do you have any idea how to include IPv6 support in the configuration? I have read the IPv6 wiki page and it looks like the OpenVPN configuration is easy. I see Docker doesn't support v6 natively for now, but there is a PR (https://github.com/docker/docker/pull/8947) to support it. As a workaround I tried to use the LXC driver, set an IP address on docker0 and start the Docker container with LXC flags. As a result, Docker couldn't start the daemon because the old containers could not be found, so I rolled the configuration back.
Do you have any experience with IPv6, especially with this OpenVPN container and Docker? Is there a way to route all IPv6 traffic through v4? Android ignores the v4 tunnel if there is native v6 connectivity.