stoli412 opened this issue 3 years ago
Keeping the existing IPv4 block and adding an additional WIREGUARD_ADDRESS line with the IPv6 block

Note you can't set an environment variable twice; you need to pass it as a comma-separated value, e.g. WIREGUARD_ADDRESS=1.2.3.4/32,fc00::1/128.
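For illustration, a hypothetical docker-compose fragment with both address families in a single comma-separated value (the addresses and the image tag here are placeholders, not real Mullvad assignments):

```yaml
services:
  gluetun:
    image: qdm12/gluetun
    environment:
      # Placeholder addresses; use the ones from your own Mullvad config
      - WIREGUARD_ADDRESS=10.0.0.2/32,fc00:bbbb::2/128
```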
container fails to complete startup because it can't resolve IP addresses and never connects to Mullvad's servers

Actually it does connect; this looks like it's being blocked by the firewall (ip6tables).
I'll comment back with a test image which allows IPv6 through the firewall. For now, maybe try with -e FIREWALL=off and see if your IPv6 leaks or not?
There is also another aspect: for now all mullvad endpoints used in gluetun are IPv4. I'm wondering if you can tunnel ipv6 if you use the IPv4 endpoint. I'll change it to IPv6 eventually if you set some env like WIREGUARD_IPV6_ENDPOINT=yes
for example, but that might be irrelevant to this issue.
It looks like WIREGUARD_ADDRESS won't accept CSV. I get this error:
2021/08/28 12:42:06 ERROR cannot read VPN settings: cannot read Wireguard settings: environment variable WIREGUARD_ADDRESS: invalid CIDR address: 10.xxx.xxx.xxx/32,fc00:xxxx:xxxx:xxxx::x:xxxx/128
Putting each block in quotes doesn't work either.
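For what it's worth, the error suggests the whole comma-separated value is being handed to the CIDR parser unsplit. A minimal POSIX shell sketch (with placeholder addresses) of the split that would need to happen first:

```shell
# Split a comma-separated WIREGUARD_ADDRESS into its two blocks
# before each one is parsed as a CIDR (placeholder addresses).
WIREGUARD_ADDRESS="10.0.0.2/32,fc00:bbbb::2/128"
ipv4="${WIREGUARD_ADDRESS%%,*}"   # everything before the first comma
ipv6="${WIREGUARD_ADDRESS#*,}"    # everything after the first comma
echo "$ipv4"
echo "$ipv6"
```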
I tried disabling the firewall as suggested and using only my IPv6 block, and the gluetun log shows my public IPv4 address instead of the VPN's:
2021/08/28 12:58:26 INFO ip getter: Public IP address is xxx.xxx.xxx.xxx (United Kingdom, England, Manchester)
When I then attach another container to gluetun for testing, my public IPv4 and IPv6 are displayed on ifconfig.io using curl and curl -6.
I think Mullvad will tunnel IPv6 over an IPv4 endpoint. If you use their configuration tool you can choose between IPv4 and IPv6 for the connection protocol and then for tunnel traffic you can choose IPv4, IPv6 or both. Here's an example of a generated conf file with IPv4 connection protocol and IPv4/6 tunnel traffic:
[Interface]
PrivateKey = [redacted]
Address = 10.xxx.xxx.xxx/32,fc00:xxxx:xxxx:xxxx::x:xxxx/128
DNS = 193.138.218.74
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
[Peer]
PublicKey = [redacted]
AllowedIPs = 0.0.0.0/0,::0/0
Endpoint = [redacted]:51820
Since AllowedIPs = 0.0.0.0/0,::0/0 is already set in the code, the only difference is that the [Interface]'s Address has only the IPv4 address and not the IPv6 one. So I think setting both the IPv4 and IPv6 addresses in WIREGUARD_ADDRESS might just make it work.
c6fedd9214871424d4a05269472e96e632eaafce just added support for multiple addresses, so you should be able to try. There might still be some IPv6 firewall blocking, or maybe not; let me know!
Just tried it out, and it's accepting both the IPv4 and IPv6 blocks in the WIREGUARD_ADDRESS variable now. With the firewall enabled, I get no connectivity in connected containers. With the firewall disabled, it's using my public IPv4/IPv6 instead of the VPN.
Log - firewall on: _gluetun_logs-firewallon.txt
Log - firewall off: _gluetun_logs-firewalloff.txt
I think the container was broken, as it wouldn't set any address at all for Wireguard. I just fixed it in https://github.com/qdm12/gluetun/commit/61afdce788c8eb8896bb15a3276808ac819055f5, sorry about that. Please try with the corresponding latest image (i.e. re-pull latest) once it's built in a few minutes (https://github.com/qdm12/gluetun/actions/runs/1177909686).
I think we're getting closer!
Firewall on: uses VPN IPv4 address, no IPv6 connectivity.
Firewall off: uses VPN IPv4 address, uses public IPv6 address.
Looking at the gluetun log, it looks like it's still not setting a default route for IPv6? _gluetun_logs.txt

In both cases my attached container is showing the correct IP addresses on wg0:
root@307a3f836eaf:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: wg0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1420 qdisc fq_codel state UNKNOWN group default qlen 500
link/none
inet 10.66.38.185/32 scope global wg0
valid_lft forever preferred_lft forever
inet6 fc00:bbbb:bbbb:bb01::3:26b8/128 scope global
valid_lft forever preferred_lft forever
inet6 fe80::e310:7318:d58d:46bc/64 scope link stable-privacy
valid_lft forever preferred_lft forever
428: eth0@if429: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:10:05:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.16.5.2/24 brd 172.16.5.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fd5f:c26e:7746:f664::2/64 scope global nodad
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe10:502/64 scope link
valid_lft forever preferred_lft forever
Yes, it's definitely ip6tables blocking something.

If you don't mind me asking, can you share the output of docker exec gluetun ip6tables -nvL, taken while you try to fetch something over IPv6 with the firewall enabled? We should be able to more or less easily spot where the IPv6 traffic gets blocked.
ip6tables shows that it is not being blocked.
Chain INPUT (policy DROP 53 packets, 4947 bytes)
pkts bytes target prot opt in out source destination
61 6780 ACCEPT all lo * ::/0 ::/0
3 240 ACCEPT all * * ::/0 ::/0 ctstate RELATED,ESTABLISHED
11 720 ACCEPT tcp wg0 * ::/0 ::/0 tcp dpt:57317
0 0 ACCEPT udp wg0 * ::/0 ::/0 udp dpt:57317
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy DROP 2168 packets, 222K bytes)
pkts bytes target prot opt in out source destination
61 6780 ACCEPT all * lo ::/0 ::/0
67 4400 ACCEPT all * * ::/0 ::/0 ctstate RELATED,ESTABLISHED
10 536 ACCEPT all * wg0 ::/0 ::/0
I tried probing port 57317 from the host but still got no response; probing from inside the gluetun container works.
Ah OK, so "Firewall off: uses vpn IPv4 address, uses public IPv6 address" probably means it's using the non-VPN IPv6 address.
I think I found what's missing. 59a3a072e0fd7cead8ec78c74f284012fd124d0a adds IPv6 routing for Wireguard. It only does this if --sysctl net.ipv6.conf.all.disable_ipv6=0 is passed to the container (gluetun checks dynamically whether there is any IPv6 route on the system, to avoid erroring when IPv6 isn't enabled).
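In docker-compose terms, that flag would look something like this (a hypothetical minimal fragment; the service name and image tag are assumptions):

```yaml
services:
  gluetun:
    image: qdm12/gluetun:latest
    # Enables IPv6 inside the container so gluetun can add an IPv6 route
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
```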
I haven't tried checking my IPv6 address; do you mind trying it please (pull the latest image)?
I'm getting the same results with the latest image.
I'm afk right now, but maybe try with the firewall disabled: is the IPv6 address the one from the VPN server, or yours? Maybe this time the firewall does block it.
Does not work even with firewall off.
I have gluetun connected to my router and have access to the network, nothing points to it being a router issue.
With the firewall off in gluetun, IPv4 responds to ICMP while IPv6 does not reply. I have no idea why it's not responding; ip6tables shows that it's receiving packets.
I have gluetun connected to my router and have access to the network, nothing points to it being a router issue.
Route issue, not the same as router!
Does not work even with firewall off. With the firewall off in gluetun, IPv4 responds to ICMP while IPv6 does not reply
But you mentioned previously you had an IPv6 address when using FIREWALL=off, right?
A few more questions:
Can you try using for example:
docker run -it --rm --sysctl net.ipv6.conf.all.disable_ipv6=0 alpine:3.14 wget -qO- https://api6.ipify.org
See if you get an IPv6 address? Mine gives me wget: can't connect to remote host: Network unreachable, but that's most likely because IPv6 is disabled on my network.
With the firewall off in gluetun ipv4 responds to icmp while ipv6 does not reply
I'm not sure pinging gluetun is relevant here. The issue is about tunneling IPv6 through Wireguard, and I believe it should be working with the commit I mentioned. Try exec'ing into gluetun and running wget -qO- https://api6.ipify.org to see if you get the VPN server's IPv6 address. Also, that tunneling may not work if the VPN server doesn't support IPv6.
Yeah, tried exec'ing wget -qO- https://api6.ipify.org in gluetun: no response.

I tested whether IPv6 works in other containers and they are all good; it's just gluetun that's not working.
As discussed in #134 (here) it would be great if IPv6 tunnelling over Wireguard could be enabled for providers who support it (eg, Mullvad).
I've done some testing with the existing image and, as expected, IPv6 routing does not work. In my docker-compose file, I've tried keeping the existing IPv4 block, and adding an additional WIREGUARD_ADDRESS line with the IPv6 block. In both cases the container fails to complete startup because it can't resolve IP addresses and never connects to Mullvad's servers. (Logs: _gluetun-wireguard-ipv6test_gluetun-wireguard_1_logs.txt)
My gluetun docker-compose looks like this: