qdm12 / gluetun

VPN client in a thin Docker container for multiple VPN providers, written in Go, and using OpenVPN or Wireguard, DNS over TLS, with a few proxy servers built-in.
https://hub.docker.com/r/qmcgaw/gluetun
MIT License

Bug: Wireguard wouldn't recover from a dropped connection #2471

Open · Darkfella91 opened 1 week ago

Darkfella91 commented 1 week ago

Is this urgent?

No

Host OS

Talos OS

CPU arch

x86_64

VPN service provider

ProtonVPN

What are you using to run the container

Kubernetes

What is the version of Gluetun

Running version v3.39.0 built on 2024-08-09T08:07:23.827Z (commit 09c47c7)

What's the problem 🤔

Basically, each time my internet connection drops for any reason, or my DNS server becomes unavailable, the health check restarts the VPN connection, but it then fails to reconnect and keeps looping. Only manually killing the pod restores my VPN connection.

Share your logs (at least 10 lines)

2024-09-07T10:20:15Z INFO [vpn] retrying in 30s
2024-09-07T10:20:15Z DEBUG [wireguard] deleting link...
2024-09-07T10:20:27Z INFO [healthcheck] program has been unhealthy for 11s: restarting VPN
2024-09-07T10:20:27Z INFO [healthcheck] 👉 See https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md
2024-09-07T10:20:27Z INFO [healthcheck] DO NOT OPEN AN ISSUE UNLESS YOU READ AND TRIED EACH POSSIBLE SOLUTION
2024-09-07T10:20:45Z DEBUG [wireguard] Wireguard server public key: VNNO5MYorFu1UerHvoXccW6TvotxbJ1GAGJKtzM9HTY=
2024-09-07T10:20:45Z DEBUG [wireguard] Wireguard client private key: 2MD...HY=
2024-09-07T10:20:45Z DEBUG [wireguard] Wireguard pre-shared key: [not set]
2024-09-07T10:20:45Z INFO [firewall] allowing VPN connection...
2024-09-07T10:20:45Z DEBUG [firewall] iptables --delete OUTPUT -d 149.88.27.193 -o eth0 -p udp -m udp --dport 51820 -j ACCEPT
2024-09-07T10:20:45Z DEBUG [firewall] iptables --delete OUTPUT -o tun0 -j ACCEPT
2024-09-07T10:20:45Z DEBUG [firewall] ip6tables --delete OUTPUT -o tun0 -j ACCEPT
2024-09-07T10:20:45Z DEBUG [firewall] iptables --append OUTPUT -d 185.159.157.23 -o eth0 -p udp -m udp --dport 51820 -j ACCEPT
2024-09-07T10:20:45Z DEBUG [firewall] iptables --append OUTPUT -o tun0 -j ACCEPT
2024-09-07T10:20:45Z DEBUG [firewall] ip6tables --append OUTPUT -o tun0 -j ACCEPT
2024-09-07T10:20:45Z INFO [wireguard] Using available kernelspace implementation
2024-09-07T10:20:45Z INFO [wireguard] Connecting to 185.159.157.23:51820
2024-09-07T10:20:45Z DEBUG [wireguard] closing controller client...
2024-09-07T10:20:45Z DEBUG [wireguard] shutting down link...
2024-09-07T10:20:45Z ERROR [vpn] adding IPv6 rule: adding rule ip rule 101: from all to all table 51820: file exists
2024-09-07T10:20:45Z INFO [vpn] retrying in 1m0s
2024-09-07T10:20:45Z DEBUG [wireguard] deleting link...
2024-09-07T10:20:47Z INFO [healthcheck] program has been unhealthy for 16s: restarting VPN
2024-09-07T10:20:47Z INFO [healthcheck] 👉 See https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md
2024-09-07T10:20:47Z INFO [healthcheck] DO NOT OPEN AN ISSUE UNLESS YOU READ AND TRIED EACH POSSIBLE SOLUTION

Share your configuration

```
env:
  VPN_SERVICE_PROVIDER: "protonvpn"
  VPN_TYPE: "wireguard"
  SERVER_CITIES: "Zurich"
  PORT_FORWARD_ONLY: "on"
  WIREGUARD_PRIVATE_KEY:
    secretKeyRef:
      expandObjectName: false
      name: vpn-config
      key: private-key
  VPN_PORT_FORWARDING: "on"
  VPN_PORT_FORWARDING_PROVIDER: protonvpn
  VPN_PORT_FORWARDING_LISTENING_PORT: "6881"
  FIREWALL_DEBUG: "on"
  LOG_LEVEL: "debug"
killSwitch: true
```
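For readers not using the same chart: the env block above looks like Helm chart values rather than a raw pod spec (the bare secretKeyRef with expandObjectName is a chart convention). A minimal sketch of how the secret-backed variable would be written in a plain Kubernetes container spec, with the container name assumed:

```
containers:
  - name: gluetun                  # assumed container name
    image: qmcgaw/gluetun:v3.39.0  # image/version from the issue header above
    env:
      - name: WIREGUARD_PRIVATE_KEY
        valueFrom:
          secretKeyRef:
            name: vpn-config
            key: private-key
```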

Darkfella91 commented 3 days ago

Tried manually deleting that ip rule, and then the gluetun container is able to restore the connection, but I have no idea why it isn't cleaned up automatically.

Darkfella91 commented 3 days ago

Using this as a workaround for now:

```
lifecycle:
  postStart:
    exec:
      command:
        - /bin/sh
        - -c
        - ip rule del table 51820 || true
```
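The postStart hook runs each time the container starts, so any leftover table 51820 policy rule is removed before gluetun tries to add it again, which is what otherwise produces the "file exists" error in the log above.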

theopilbeam commented 1 day ago

> Using this as a workaround for now: `lifecycle: postStart: exec: command: - /bin/sh - -c - ip rule del table 51820 || true`

Seeing the same issue, but for me the IPv6 rules aren't being cleaned up; I'm using `(ip rule del table 51820; ip -6 rule del table 51820) || true` as my postStart command.
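Expanded into the same lifecycle hook format as the earlier workaround, that variant would look roughly like this (a sketch combining theopilbeam's command with the structure above):

```
lifecycle:
  postStart:
    exec:
      command:
        - /bin/sh
        - -c
        - (ip rule del table 51820; ip -6 rule del table 51820) || true
```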

Darkfella91 commented 1 day ago

> Using this as a workaround for now: `lifecycle: postStart: exec: command: - /bin/sh - -c - ip rule del table 51820 || true`

I have disabled IPv6 for my pod; that's why I only have IPv4 rules.