Open buroa opened 1 year ago
Why do you need this? What's the use case?
@qdm12 If I use Mullvad, for example, it attempts to connect to a lot of IPv6-enabled WireGuard servers -- but I don't actually have IPv6 enabled in my homelab -- so it shouldn't even attempt this. It's causing a lot of delay until it actually grabs an IPv4-enabled server.
Ok that's interesting, maybe a better way to address this would be to fix the IPv6 detection mechanism 😉
The mechanism right now is https://github.com/qdm12/gluetun/blob/3100cc1e5ee7d6523cc02e1c874a271713c13395/internal/netlink/ipv6.go#L9 and it basically goes through the routes of each link, and if one route is IPv6, it assumes IPv6 is supported. What's the output of
docker run -it --rm alpine ip route
docker run -it --rm alpine ip -6 route
Maybe you do have IPv6 enabled on your machine, but not on your network?? In which case, adding an extra step to query an IPv6 address and see if it works would do I think 🤔
#1648 @qriff can you give more details on this in #1563 and answer:
1. Do you think there would be a more reliable way to detect IPv6? One could try creating an unattached IPv6 link and see if it works, I guess?
2. If not, we can go ahead and add an optional env variable to enable/disable IPv6 (defaulting to `auto`).
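On the first question, one cheap probe (a sketch, not gluetun code) is to ask the kernel for an IPv6 socket and see whether it accepts one; if IPv6 is compiled out or disabled via sysctl, this fails immediately, with no routes or external connectivity needed:

```go
package main

import (
	"fmt"
	"net"
)

// kernelSupportsIPv6 tries to open an IPv6 TCP listener on the loopback
// address on an ephemeral port. If IPv6 is absent from the kernel or
// disabled (disable_ipv6=1), the listen call fails right away.
func kernelSupportsIPv6() bool {
	ln, err := net.Listen("tcp6", "[::1]:0")
	if err != nil {
		return false
	}
	ln.Close()
	return true
}

func main() {
	fmt.Println("IPv6 supported by kernel:", kernelSupportsIPv6())
}
```

Note this only answers "does the stack support IPv6 at all", not "does the network route IPv6", which is exactly the distinction raised above.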
So AFAIK, YMMV et al., the definition of IPv6 detection is the (enabled) state in sysctl.
https://serverfault.com/questions/660979/how-to-disable-ipv6-support-in-linux-entirely
I know it is not specifically what you asked, but...
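That sysctl state can be read straight from procfs. A minimal sketch, assuming the standard Linux path and treating a missing /proc/sys/net/ipv6 tree as IPv6 being absent from the kernel:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ipv6DisabledViaSysctl reads net.ipv6.conf.all.disable_ipv6 from procfs.
// A missing file usually means the kernel has no IPv6 support at all,
// which is reported as disabled too.
func ipv6DisabledViaSysctl() bool {
	b, err := os.ReadFile("/proc/sys/net/ipv6/conf/all/disable_ipv6")
	if err != nil {
		return true
	}
	return strings.TrimSpace(string(b)) == "1"
}

func main() {
	fmt.Println("IPv6 disabled via sysctl:", ipv6DisabledViaSysctl())
}
```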
Regarding autodetection: the multitude of ways IPv6 can be utilized is quite vast, and, just as the realization in #1648 indicates, the current detection fails because it runs before the IPv6 address is actually set and an IPv6 route obtained (no local IPv6 connectivity, only remote (VPN) IPv6 connectivity, proxies, net-containers, etc.). It might not even matter whether IPv6 is configured (or merely enabled): firewall rules should still be set regardless, since in principle an IPv6 address might pop up at any time and the default configuration should account for it.
https://docs.docker.com/engine/reference/run/#network-settings
https://stackoverflow.com/questions/55399695/attaching-a-docker-container-to-another-containers-network-with-net-container
AFAIK, YMMV et al., a sidenote on the technicalities of IPv6 disabling/enabling: in containers it is inherited from the host, or kernel-tuned via sysctl by the container engine (also remembering container namespacing). Obviously a complementary environment variable is fully valid in ADDITION to the actual implementation (it is considered courteous to note this circumstance in the setting description/documentation, for guidance toward proper configuration).
The decision made here might define the longevity and suitability of this project (i.e. compliance, compatibility).
ip -6 route
ip r && ip -6 route
default via 10.244.1.220 dev eth0 mtu 9000
10.244.1.220 dev eth0 scope link
fe80::/64 dev eth0 proto kernel metric 256 pref medium
@qriff interesting points, I am thinking to add an ipv6 dummy route to test for ipv6 support, instead of checking existing routes for IPv6 source or destination.
Would you have any suggestion of what `ip -6 route add ...` command to use so as not to affect existing routing tables? (not a routing expert, to be honest) 🤔
This also happens with AirVPN as I have blocked IPv6 outbound (via network policy in Kubernetes), and it still tries to connect via v6.
For reference (inside gluetun container):
# ip r && ip -6 r
default via 10.244.4.77 dev eth0 mtu 1500
10.0.0.0/8 via 10.244.4.77 dev eth0
10.244.4.77 dev eth0 scope link
172.16.2.0/24 dev vxlan0 proto kernel scope link src 172.16.2.1
192.168.0.0/16 via 10.244.4.77 dev eth0
fd7d:76ee:e68f:a993:16a9:9a73:9cd9:c35c dev wg0 proto kernel metric 256 pref medium
fddf:f7bc:9670:4::bd91 dev eth0 proto kernel metric 256 pref medium
fddf:f7bc:9670:4::c7a7 dev eth0 metric 1024 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev vxlan0 proto kernel metric 256 pref medium
default via fddf:f7bc:9670:4::c7a7 dev eth0 metric 1024 mtu 1500 pref medium
Can we please have the ability to disable the IPv6 checks? I can't use the firewall and port forwarding at all because Synology (the lame buggers) won't update the IPv6 tables... So many people are having this issue :( <3
ERROR [vpn] redirecting port in firewall: redirecting port: redirecting IPv6 source port 33799 to destination port 6881 on interface tun0: command failed: "ip6tables-legacy -t nat --append PREROUTING -i tun0 -d ::1 -p tcp --dport 33799 -j REDIRECT --to-ports 6881": ip6tables v1.8.10 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Just uploading my docker file.
custom.conf
# Disable IPv6
pull-filter ignore "route-ipv6"
pull-filter ignore "ifconfig-ipv6"
Custom docker network
docker network create --subnet 172.19.0.0/16 mynet
docker-compose.yml
version: '3.7'

networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.19.0.0/16

x-common-variables: &common-variables
  TZ: ${TZ}
  PUID: ${PUID}
  PGID: ${PGID}
  UMASK: ${UMASK}

services:
  gluetun-p2p:
    image: qmcgaw/gluetun:latest
    container_name: gluetun-p2p
    hostname: gluetun-p2p
    privileged: true
    cap_add:
      - NET_ADMIN
    environment:
      <<: *common-variables
      VPN_SERVICE_PROVIDER: protonvpn
      VPN_TYPE: openvpn
      VPN_PORT_FORWARDING: "on"
      VPN_PORT_FORWARDING_PROVIDER: ${VPN_PORT_FORWARDING_PROVIDER}
      VPN_PORT_FORWARDING_LISTENING_PORT: ${VPN_P2P_PORT_FORWARDING_LISTENING_PORT}
      OPENVPN_USER: ${OPENVPN_USER}
      OPENVPN_PASSWORD: ${OPENVPN_PASSWORD}
      OPENVPN_CUSTOM_CONFIG: /gluetun/custom.conf
      OPENVPN_PROTOCOL: tcp
      FREE_ONLY: "off"
      FIREWALL_OUTBOUND_SUBNETS: 172.19.0.0/16,192.168.24.0/24
      FIREWALL_VPN_INPUT_PORTS: 6881
      FIREWALL_INPUT_PORTS: 6881
      BLOCK_MALICIOUS: "on"
      DOT_PRIVATE_ADDRESS: 127.0.0.1/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,169.254.0.0/16
      LOG_LEVEL: ${LOG_LEVEL:-info}
    ports:
      - ${QBITTORRENT_WEBUI_PORT}:8080
      - ${QBITTORRENT_INCOMING_PORT}:6881
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=1
      - net.ipv6.conf.default.disable_ipv6=1
    restart: unless-stopped
    volumes:
      - /volume1/docker/gluetun:/gluetun
      - /volume1/docker/gluetun/config/custom.conf:/gluetun/custom.conf:ro
    networks:
      - mynet

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    network_mode: "service:gluetun-p2p"
    environment:
      <<: *common-variables
      PUID: ${PUID}
      PGID: ${PGID}
      TZ: ${TZ}
      WEBUI_PORT: ${QBITTORRENT_WEBUI_PORT}
      TORRENTING_PORT: ${QBITTORRENT_INCOMING_PORT}
    volumes:
      - /volume1/docker/qbittorrent/config:/config
      - /volume1/Media:/media
    depends_on:
      gluetun-p2p:
        condition: service_healthy
    restart: unless-stopped

  qbittorrent_natmap:
    image: ghcr.io/soxfor/qbittorrent-natmap:latest
    container_name: qbittorrent_natmap
    environment:
      <<: *common-variables
      TZ: ${TZ}
      QBITTORRENT_USER: ${QBITTORRENT_USER}
      QBITTORRENT_PASS: ${QBITTORRENT_PASSWORD}
      QBITTORRENT_PORT: 8080
      VPN_CT_NAME: gluetun-p2p
      VPN_GATEWAY: ${VPN_GATEWAY}
    depends_on:
      qbittorrent:
        condition: service_started
      gluetun-p2p:
        condition: service_healthy
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
I have also followed this and have the same issue :( - https://serverfault.com/questions/660979/how-to-disable-ipv6-support-in-linux-entirely
What's the feature 🧐
Allow us to completely disable IPv6, instead of automatically checking whether our system supports it.
https://github.com/qdm12/gluetun/blob/master/internal/netlink/ipv6.go#L9
https://github.com/qdm12/gluetun/blob/master/internal/provider/utils/wireguard.go#L12
https://github.com/qdm12/gluetun/blob/master/internal/wireguard/address.go#L14
https://github.com/qdm12/gluetun/blob/master/internal/configuration/sources/env/wireguard.go#L12
Something like an env variable that goes into the user settings. It looks like the WireGuard code is already checking that setting, so as long as we can pass it as an override, this will work for anyone who has the same setup.
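A tri-state setting along those lines could be parsed like this; a sketch only, where the variable name IPV6_SUPPORT and the accepted values are hypothetical, just to illustrate the on/off/auto shape suggested earlier in the thread:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ipv6Mode is a hypothetical tri-state for an IPv6 override setting.
type ipv6Mode int

const (
	ipv6Auto ipv6Mode = iota // detect support at runtime (the default)
	ipv6On                   // force IPv6 on, skip detection
	ipv6Off                  // force IPv6 off, skip detection
)

// parseIPv6Mode maps a raw env value to the tri-state, defaulting to auto
// when the variable is unset or empty, and rejecting unknown values.
func parseIPv6Mode(raw string) (ipv6Mode, error) {
	switch strings.ToLower(strings.TrimSpace(raw)) {
	case "", "auto":
		return ipv6Auto, nil
	case "on", "enabled":
		return ipv6On, nil
	case "off", "disabled":
		return ipv6Off, nil
	default:
		return ipv6Auto, fmt.Errorf("invalid IPv6 setting: %q", raw)
	}
}

func main() {
	mode, err := parseIPv6Mode(os.Getenv("IPV6_SUPPORT")) // hypothetical variable name
	fmt.Println(mode, err)
}
```

With `off`, gluetun could skip the route/detection logic entirely and also skip IPv6 firewall rules, which would cover both the Mullvad server-selection delay and the Synology ip6tables case above.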
Extra information and references
No response