qdm12 / gluetun

VPN client in a thin Docker container for multiple VPN providers, written in Go, and using OpenVPN or Wireguard, DNS over TLS, with a few proxy servers built-in.
https://hub.docker.com/r/qmcgaw/gluetun
MIT License

Bug: ProtonVPN port forwarding loses connection #1882

Closed · clemone210 closed this 7 months ago

clemone210 commented 11 months ago

Is this urgent?

No

Host OS

Ubuntu

CPU arch

x86_64

VPN service provider

ProtonVPN

What are you using to run the container

docker-compose

What is the version of Gluetun

latest docker image

What's the problem πŸ€”

I use gluetun to connect plex to protonvpn with OpenVPN + port forwarding.

When starting the container, everything works: the container gets an open port and uses it to allow remote access.

After a few minutes (10-15 min), connections to the forwarded port are no longer possible, and remote access within Plex stops working. After restarting gluetun and Plex, a new port is used and it works again.

Anything I can provide in order to resolve this?

Share your logs (at least 10 lines)

========================================
========================================
=============== gluetun ================
========================================
=========== Made with ❀️ by ============
======= https://github.com/qdm12 =======
========================================
========================================

Running version latest built on 2023-09-23T13:31:26.334Z (commit aa6dc78)

πŸ”§ Need help? https://github.com/qdm12/gluetun/discussions/new
πŸ› Bug? https://github.com/qdm12/gluetun/issues/new
✨ New feature? https://github.com/qdm12/gluetun/issues/new
β˜• Discussion? https://github.com/qdm12/gluetun/discussions/new
πŸ’» Email? quentin.mcgaw@gmail.com
πŸ’° Help me? https://www.paypal.me/qmcgaw https://github.com/sponsors/qdm12
2023-09-25T14:30:39+02:00 INFO [routing] default route found: interface eth0, gateway 172.20.0.1, assigned IP 172.20.0.4 and family v4
2023-09-25T14:30:39+02:00 INFO [routing] local ethernet link found: eth0
2023-09-25T14:30:39+02:00 INFO [routing] local ipnet found: 172.20.0.0/16
2023-09-25T14:30:40+02:00 INFO [storage] creating /gluetun/servers.json with 17689 hardcoded servers
2023-09-25T14:30:40+02:00 INFO Alpine version: 3.18.3
2023-09-25T14:30:40+02:00 INFO OpenVPN 2.5 version: 2.5.8
2023-09-25T14:30:40+02:00 INFO OpenVPN 2.6 version: 2.6.5
2023-09-25T14:30:40+02:00 INFO Unbound version: 1.17.1
2023-09-25T14:30:40+02:00 INFO IPtables version: v1.8.9
2023-09-25T14:30:40+02:00 INFO Settings summary:
β”œβ”€β”€ VPN settings:
|   β”œβ”€β”€ VPN provider settings:
|   |   β”œβ”€β”€ Name: protonvpn
|   |   β”œβ”€β”€ Server selection settings:
|   |   |   β”œβ”€β”€ VPN type: openvpn
|   |   |   β”œβ”€β”€ Countries: germany
|   |   |   β”œβ”€β”€ Cities: frankfurt
|   |   |   └── OpenVPN server selection settings:
|   |   |       └── Protocol: TCP
|   |   └── Automatic port forwarding settings:
|   |       β”œβ”€β”€ Use port forwarding code for current provider
|   |       └── Forwarded port file path: /tmp/gluetun/forwarded_port
|   └── OpenVPN settings:
|       β”œβ”€β”€ OpenVPN version: 2.5
|       β”œβ”€β”€ User: [set]
|       β”œβ”€β”€ Password: s5...KML
|       β”œβ”€β”€ Network interface: tun0
|       β”œβ”€β”€ Run OpenVPN as: root
|       └── Verbosity level: 1
β”œβ”€β”€ DNS settings:
|   β”œβ”€β”€ Keep existing nameserver(s): no
|   β”œβ”€β”€ DNS server address to use: 127.0.0.1
|   └── DNS over TLS settings:
|       └── Enabled: no
β”œβ”€β”€ Firewall settings:
|   └── Enabled: no
β”œβ”€β”€ Log settings:
|   └── Log level: INFO
β”œβ”€β”€ Health settings:
|   β”œβ”€β”€ Server listening address: 127.0.0.1:9999
|   β”œβ”€β”€ Target address: cloudflare.com:443
|   β”œβ”€β”€ Duration to wait after success: 5s
|   β”œβ”€β”€ Read header timeout: 100ms
|   β”œβ”€β”€ Read timeout: 500ms
|   └── VPN wait durations:
|       β”œβ”€β”€ Initial duration: 6s
|       └── Additional duration: 5s
β”œβ”€β”€ Shadowsocks server settings:
|   └── Enabled: no
β”œβ”€β”€ HTTP proxy settings:
|   └── Enabled: no
β”œβ”€β”€ Control server settings:
|   β”œβ”€β”€ Listening address: :8000
|   └── Logging: yes
β”œβ”€β”€ OS Alpine settings:
|   β”œβ”€β”€ Process UID: 1000
|   β”œβ”€β”€ Process GID: 1000
|   └── Timezone: europe/berlin
β”œβ”€β”€ Public IP settings:
|   β”œβ”€β”€ Fetching: every 12h0m0s
|   └── IP file path: /tmp/gluetun/ip
└── Version settings:
    └── Enabled: yes
2023-09-25T14:30:40+02:00 INFO [routing] default route found: interface eth0, gateway 172.20.0.1, assigned IP 172.20.0.4 and family v4
2023-09-25T14:30:40+02:00 INFO [routing] adding route for 0.0.0.0/0
2023-09-25T14:30:40+02:00 INFO [firewall] firewall disabled, only updating allowed subnets internal list
2023-09-25T14:30:40+02:00 INFO [routing] default route found: interface eth0, gateway 172.20.0.1, assigned IP 172.20.0.4 and family v4
2023-09-25T14:30:40+02:00 INFO TUN device is not available: open /dev/net/tun: no such file or directory; creating it...
2023-09-25T14:30:40+02:00 INFO [dns] using plaintext DNS at address 1.1.1.1
2023-09-25T14:30:40+02:00 INFO [http server] http server listening on [::]:8000
2023-09-25T14:30:40+02:00 INFO [healthcheck] listening on 127.0.0.1:9999
2023-09-25T14:30:40+02:00 INFO [firewall] firewall disabled, only updating internal VPN connection
2023-09-25T14:30:40+02:00 INFO [openvpn] OpenVPN 2.5.8 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Nov  2 2022
2023-09-25T14:30:40+02:00 INFO [openvpn] library versions: OpenSSL 3.1.3 19 Sep 2023, LZO 2.10
2023-09-25T14:30:40+02:00 INFO [openvpn] TCP/UDP: Preserving recently used remote address: [AF_INET]194.126.177.14:443
2023-09-25T14:30:40+02:00 INFO [openvpn] Attempting to establish TCP connection with [AF_INET]194.126.177.14:443 [nonblock]
2023-09-25T14:30:40+02:00 INFO [healthcheck] healthy!
2023-09-25T14:30:40+02:00 INFO [openvpn] TCP connection established with [AF_INET]194.126.177.14:443
2023-09-25T14:30:40+02:00 INFO [openvpn] TCP_CLIENT link local: (not bound)
2023-09-25T14:30:40+02:00 INFO [openvpn] TCP_CLIENT link remote: [AF_INET]194.126.177.14:443
2023-09-25T14:30:40+02:00 WARN [openvpn] 'link-mtu' is used inconsistently, local='link-mtu 1635', remote='link-mtu 1636'
2023-09-25T14:30:40+02:00 WARN [openvpn] 'comp-lzo' is present in remote config but missing in local config, remote='comp-lzo'
2023-09-25T14:30:40+02:00 INFO [openvpn] [node-de-17.protonvpn.net] Peer Connection Initiated with [AF_INET]194.126.177.14:443
2023-09-25T14:30:41+02:00 INFO [openvpn] TUN/TAP device tun0 opened
2023-09-25T14:30:41+02:00 INFO [openvpn] /sbin/ip link set dev tun0 up mtu 1500
2023-09-25T14:30:41+02:00 INFO [openvpn] /sbin/ip link set dev tun0 up
2023-09-25T14:30:41+02:00 INFO [openvpn] /sbin/ip addr add dev tun0 10.81.0.7/16
2023-09-25T14:30:41+02:00 INFO [openvpn] UID set to nonrootuser
2023-09-25T14:30:41+02:00 INFO [openvpn] Initialization Sequence Completed
2023-09-25T14:30:41+02:00 INFO [firewall] firewall disabled, only updating allowed ports internal state
2023-09-25T14:30:41+02:00 INFO [vpn] You are running 6 commits behind the most recent latest
2023-09-25T14:30:41+02:00 INFO [port forwarding] starting
2023-09-25T14:30:41+02:00 INFO [port forwarding] gateway external IPv4 address is 194.126.177.84
2023-09-25T14:30:41+02:00 INFO [port forwarding] port forwarded is 36736
2023-09-25T14:30:41+02:00 INFO [firewall] firewall disabled, only updating allowed ports internal state
2023-09-25T14:30:41+02:00 INFO [port forwarding] writing port file /tmp/gluetun/forwarded_port
2023-09-25T14:30:41+02:00 INFO [ip getter] Public IP address is 194.126.177.84 (Germany, Hesse, Frankfurt am Main)

Share your configuration

gluetun:
    image: qmcgaw/gluetun:${GLUETUN_VERSION}
    container_name: gluetun
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - VPN_SERVICE_PROVIDER=protonvpn
      - OPENVPN_USER=myuser+pmp
      - OPENVPN_PASSWORD=mypassword
      - FIREWALL_VPN_INPUT_PORTS=32400
      - VPN_PORT_FORWARDING=ON
      - SERVER_COUNTRIES=GERMANY
      - FIREWALL=OFF
      - DOT=OFF
      - OPENVPN_PROTOCOL=TCP
      - SERVER_CITIES=FRANKFURT
      - TZ=${TIMEZONE}
    ports:
      - 32400:32400
clemone210 commented 11 months ago

It seems that the port is not kept consistent on ProtonVPN's side when not in use. Do we have any information about how long the port stays mapped and published with ProtonVPN?

qdm12 commented 11 months ago

Technically speaking, they use the NAT-PMP protocol: Gluetun requests a port with a 60-second lifetime, then re-requests it every 45 seconds for another 60-second lifetime. In other words, it renews the mapping 15 seconds before it expires to maintain it; the natpmpc equivalent is sketched below.
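For illustration, here is the same cadence reproduced with the natpmpc CLI; a minimal sketch, assuming natpmpc is installed and 10.2.0.1 is the VPN gateway (the -a arguments follow ProtonVPN's own instructions, quoted further below):

GATEWAY=10.2.0.1
# request a TCP mapping with a 60-second lifetime
natpmpc -a 1 0 tcp 60 -g "$GATEWAY"
# renew every 45 seconds, i.e. 15 seconds before each lifetime expires
while sleep 45; do
  natpmpc -a 1 0 tcp 60 -g "$GATEWAY" || { echo "renewal failed" >&2; break; }
done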

Does it behave the same with image :v3.35.0? πŸ€”

akutruff commented 11 months ago

Yeah, this sounds like expected behavior. It's annoying, but that's how ProtonVPN works. You need to automate some other script or program to update the port forward settings.

qdm12 commented 11 months ago

@akutruff so even though Gluetun does re-request correctly on time, and their gateway answers correctly, the forwarded port gets disconnected silently after a few minutes!? I guess I could add an option to try to reach publicip:forwardedport every N seconds to check that the forwarded port works, but ideally not, since my time resources are a bit limited πŸ˜„
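In the meantime, such a check can be done externally; a minimal sketch, assuming the IP and forwarded-port files gluetun writes (paths from the settings summary above) are mounted on the host:

#!/bin/sh
# check whether the forwarded port is reachable on the VPN public IP
IP=$(cat /tmp/gluetun/ip)                  # public IP file written by gluetun
PORT=$(cat /tmp/gluetun/forwarded_port)    # forwarded port file written by gluetun
if nc -z -w 5 "$IP" "$PORT"; then
  echo "port $PORT is reachable on $IP"
else
  echo "port $PORT is NOT reachable on $IP" >&2
fi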

akutruff commented 11 months ago

@qdm12 It sounds like you're already doing the right thing. You just need to continually poll them for a port with natpmpc, as far as I understand. I don't think you'd need to do any more of a check than that.

However, I just checked the container I had set up to test your port forwarding PR, and the port is no longer open. : ( I don't see any port forwarding messages in the log after the reconnect.

gluetun                   | 2023-09-25T19:54:51Z INFO [healthcheck] unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout
gluetun                   | 2023-09-25T19:54:56Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52716 in 12.27Β΅s
gluetun                   | 2023-09-25T19:54:59Z INFO [healthcheck] program has been unhealthy for 6s: restarting VPN (see https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md)
gluetun                   | 2023-09-25T19:54:59Z INFO [vpn] stopping
gluetun                   | 2023-09-25T19:54:59Z INFO [vpn] starting
gluetun                   | 2023-09-25T19:54:59Z INFO [firewall] allowing VPN connection...
gluetun                   | 2023-09-25T19:54:59Z INFO [wireguard] Using userspace implementation since Kernel support does not exist
gluetun                   | 2023-09-25T19:55:00Z INFO [wireguard] Connecting to ***
gluetun                   | 2023-09-25T19:55:00Z INFO [wireguard] Wireguard setup is complete. Note Wireguard is a silent protocol and it may or may not work, without giving any error message. Typically i/o timeout errors indicate the Wireguard connection is not working.
gluetun                   | 2023-09-25T19:55:00Z INFO [vpn] VPN gateway IP address: 10.2.0.1
gluetun                   | 2023-09-25T19:55:00Z INFO [healthcheck] healthy!
gluetun                   | 2023-09-25T19:55:01Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52726 in 15.03Β΅s
gluetun                   | 2023-09-25T19:55:06Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52734 in 12.66Β΅s
gluetun                   | 2023-09-25T19:55:08Z INFO [healthcheck] unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout
gluetun                   | 2023-09-25T19:55:09Z INFO [healthcheck] healthy!
gluetun                   | 2023-09-25T19:55:10Z INFO [ip getter] Public IP address is *** (United States, New York, New York City)
gluetun                   | 2023-09-25T19:55:12Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52746 in 16.72Β΅s
gluetun                   | 2023-09-25T19:55:17Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52756 in 12.48Β΅s
gluetun                   | 2023-09-25T19:55:17Z INFO [healthcheck] unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout
gluetun                   | 2023-09-25T19:55:22Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:52762 in 12.48Β΅s
gluetun                   | 2023-09-25T19:55:23Z INFO [healthcheck] healthy!
akutruff commented 11 months ago

Are you restarting the port forward process after the VPN restarts?

akutruff commented 11 months ago

@qdm12 I just verified that the port is now being reported as 0 when there's a healthcheck failure. The port-mapper lines in the logs below show the control server's output for the port.

gluetun                   | 2023-09-25T20:24:31Z INFO [http server] 200 GET /portforwarded wrote 15B to 127.0.0.1:56848 in 20.17Β΅s
port-mapper           | 53986
gluetun                   | 2023-09-25T20:24:51Z INFO [healthcheck] unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout
gluetun                   | 2023-09-25T20:24:59Z INFO [healthcheck] program has been unhealthy for 6s: restarting VPN (see https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md)
gluetun                   | 2023-09-25T20:24:59Z INFO [vpn] stopping
gluetun                   | 2023-09-25T20:24:59Z INFO [port forwarding] stopping
gluetun                   | 2023-09-25T20:24:59Z INFO [firewall] removing allowed port 53986...
gluetun                   | 2023-09-25T20:24:59Z INFO [vpn] starting
gluetun                   | 2023-09-25T20:25:00Z INFO [port forwarding] removing port file /tmp/gluetun/forwarded_port
gluetun                   | 2023-09-25T20:25:00Z INFO [firewall] allowing VPN connection...
gluetun                   | 2023-09-25T20:25:00Z INFO [wireguard] Using userspace implementation since Kernel support does not exist
gluetun                   | 2023-09-25T20:25:00Z INFO [wireguard] Connecting to ***
gluetun                   | 2023-09-25T20:25:00Z INFO [wireguard] Wireguard setup is complete. Note Wireguard is a silent protocol and it may or may not work, without giving any error message. Typically i/o timeout errors indicate the Wireguard connection is not working.
gluetun                   | 2023-09-25T20:25:00Z INFO [vpn] VPN gateway IP address: 10.2.0.1
gluetun                   | 2023-09-25T20:25:04Z INFO [healthcheck] healthy!
gluetun                   | 2023-09-25T20:25:05Z INFO [ip getter] Public IP address is ***
gluetun                   | 2023-09-25T20:25:32Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:56896 in 15.89Β΅s
port-mapper           | 0
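(The port-mapper lines above are simply the control server's response for the forwarded port; polling it amounts to something like the following sketch, assuming the control server listens on :8000 as in the settings summary and that jq is available:)

# poll gluetun's control server for the forwarded port every 5 seconds
while sleep 5; do
  curl -s http://127.0.0.1:8000/v1/openvpn/portforwarded | jq -r .port
done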
clemone210 commented 11 months ago

With the version built on 2023-09-24T16:54:36.207Z (commit 9b00763) there seems to be a change. A few commits before, the connection got lost and the port was not reachable after 1 minute. Now the connection will somehow update, but the actual container will still lose its connection.

This is my Plex containers logs when I do a fresh restart:

Sep 26, 2023 10:03:44.948 [140519956605584] DEBUG - PublicAddressManager: Starting.
Sep 26, 2023 10:03:44.948 [140519956605584] DEBUG - PublicAddressManager: Obtaining public address and mapping port.
Sep 26, 2023 10:03:44.949 [140519956605584] DEBUG - NetworkInterface: Starting watch thread.
Sep 26, 2023 10:03:44.949 [140519895755576] DEBUG - PublicAddressManager: Obtaining public IP.
Sep 26, 2023 10:03:44.949 [140519895755576] DEBUG - [HCl#d] HTTP requesting GET https://v4.plex.tv/pms/:/ip
Sep 26, 2023 10:03:44.949 [140519889427256] DEBUG - NAT: UPnP, attempting port mapping.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - Network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkInterface: Notified of network changed (force=0)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - Detected primary interface: 10.80.0.2
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - Network interfaces:
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG -  * 1 lo (127.0.0.1) (00-00-00-00-00-00) (loopback: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG -  * 365 eth0 (172.22.0.4) (02-42-AC-16-00-04) (loopback: 0)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - Creating NetworkServices singleton.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkServices: Initializing...
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Creating new service.
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Got notification of changed network (first change: 1)
Sep 26, 2023 10:03:44.966 [140519956605584] DEBUG - NetworkService: Quick dispatch of network change.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - Network change for advertiser.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Setting up multicast listener on 0.0.0.0:32414
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - Network change for advertiser.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Setting up multicast listener on 0.0.0.0:32410
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - Network change for advertiser.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Setting up multicast listener on 0.0.0.0:32412
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Network change for browser (polled=0), closing 0 browse sockets.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Setting up multicast listener on 0.0.0.0:32413
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Browsing on interface 127.0.0.1 on broadcast address 127.255.255.255 (index: 0)
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Browsing on interface 172.22.0.4 on broadcast address 172.22.255.255 (index: 1)
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Network change for browser (polled=1), closing 0 browse sockets.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Browsing on interface 127.0.0.1 on broadcast address 127.255.255.255 (index: 0)
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Browsing on interface 172.22.0.4 on broadcast address 172.22.255.255 (index: 1)
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Network change for browser (polled=0), closing 0 browse sockets.
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Setting up multicast listener on 0.0.0.0:1901
Sep 26, 2023 10:03:44.967 [140519864240952] DEBUG - NetworkService: Browsing on interface 172.22.0.4 on broadcast address 239.255.255.250 (index: 0)

Here is some log when I noticed the connection was not possible anymore:

Sep 26, 2023 10:19:41.001 [140519832599352] DEBUG - MyPlex: sendMapping resetting state - previous mapping state: 'Mapped'.
Sep 26, 2023 10:19:41.001 [140519832599352] DEBUG - MyPlex: mapping state set to 'Unknown'.
Sep 26, 2023 10:19:41.002 [140519855803192] DEBUG - Push: Processing new content in section 2 for 18 users.
Sep 26, 2023 10:19:41.005 [140519832599352] DEBUG - MyPlex: Sending Server Info to myPlex (user=XXXXXXXX, ip=194.126.177.37, port=50842)
Sep 26, 2023 10:19:41.005 [140519832599352] DEBUG - [HCl#52] HTTP requesting POST https://plex.tv/servers.xml?auth_token=xxxxxxxxxxxxxxxxxxxx
Sep 26, 2023 10:19:41.262 [140519913012024] DEBUG - [HttpClient/HCl#52] HTTP/2.0 (0.3s) 201 response from POST https://plex.tv/servers.xml?auth_token=xxxxxxxxxxxxxxxxxxxx
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: Published Mapping State response was 201
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: Got response for d9ec52012XXXXc107851d56XXX45acXXX124033 ~ registered 194.126.177.37:50842
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: updating mapped state - current state: 'Mapped'
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: mapping state set to 'Mapped - Publishing'.
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: async reachability check - current mapped state: 'Mapped - Publishing'.
Sep 26, 2023 10:19:41.262 [140519832599352] DEBUG - MyPlex: Requesting reachability check.
Sep 26, 2023 10:19:41.263 [140519832599352] DEBUG - [HCl#53] HTTP requesting PUT https://plex.tv/api/servers/d9ec5XXXXXXXXXX1d56e2e645acd6e124033/connectivity?X-Plex-Token=xxxxxxxxxxxxxxxxxxxx&asyncIdentifier=9d83ceb4-6XXX-4f31-aXXXc-36de736b3952
Sep 26, 2023 10:19:41.383 [140519913012024] DEBUG - [HttpClient/HCl#53] HTTP/2.0 (0.1s) 200 response from PUT https://plex.tv/api/servers/d9ec5XXXX062dXXXXX51dXXX2e64XXXXXe124033/connectivity?X-Plex-Token=xxxxxxxxxxxxxxxxxxxx&asyncIdentifier=9d83ceb4-XXXX-XXXX-XXXX-36de736b3952 (reused)
Sep 26, 2023 10:19:41.383 [140519830489912] DEBUG - MyPlex: sendMapping resetting state - previous mapping state: 'Mapped - Publishing'.
Sep 26, 2023 10:19:41.383 [140519830489912] DEBUG - MyPlex: mapping state set to 'Unknown'.
Sep 26, 2023 10:19:41.385 [140519830489912] DEBUG - MyPlex: Sending Server Info to myPlex (user=XXXXXX, ip=194.126.177.37, port=50842)
Sep 26, 2023 10:19:41.385 [140519830489912] DEBUG - [HCl#54] HTTP requesting POST https://plex.tv/servers.xml?auth_token=xxxxxxxxxxxxxxxxxxxx
Sep 26, 2023 10:19:41.559 [140519913012024] DEBUG - [HttpClient/HCl#54] HTTP/2.0 (0.2s) 201 response from POST https://plex.tv/servers.xml?auth_token=xxxxxxxxxxxxxxxxxxxx (reused)
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: Published Mapping State response was 201
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: Got response for d9ec5XXXXXXdc1078XXXXXXXXXXXXXXXX33 ~ registered 194.126.177.37:50842
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: updating mapped state - current state: 'Mapped - Publishing'
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: mapping state set to 'Mapped - Publishing'.
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: async reachability check - current mapped state: 'Mapped - Publishing'.
Sep 26, 2023 10:19:41.559 [140519830489912] DEBUG - MyPlex: we already have requested a connectivity refresh for async identifier 9d83ceb4XXXXXXXXXXXXXXXXX36b3952 which has not yet expired.
Sep 26, 2023 10:19:46.375 [140519947148088] DEBUG - [EventSourceClient/pubsub/172.105.245.168:443] EventSource: Got event [data] '<Message address="194.126.177.37" port="50842" asyncIdentifier="9XXXXXXXXXXXXXXXXXXXX952" connectivity="0" command="notifyConnectivity"/>'
Sep 26, 2023 10:19:46.376 [140519947148088] DEBUG - [EventSourceClient/pubsub/172.105.245.168:443] PubSub: Got notified of reachability for async identifier 9d83ceb4-643c-4f31-af5c-36de736b3952: 0 for 194.126.177.37:50842 (responded in 4992 ms)
Sep 26, 2023 10:19:46.376 [140519947148088] DEBUG - [EventSourceClient/pubsub/172.105.245.168:443] MyPlex: reachability check - current mapping state: 'Mapped - Publishing'.
Sep 26, 2023 10:19:46.376 [140519947148088] DEBUG - [EventSourceClient/pubsub/172.105.245.168:443] MyPlex: mapping state set to 'Mapped - Not Published (Not Reachable)'.

Within gluetun there are no log entries past the initial start.

qdm12 commented 11 months ago

@clemone210 Please pull the latest image and run it with LOG_LEVEL=debug (a quick way to do this is sketched below); I've added debug logs in the 'keep port forward' part in commit 53cbd839a6a532190e96a310a3bf48e472ea61b5 (built today, 2023-09-26). Let me know what the logs say.
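A quick one-off way to do that, as a sketch (reusing the environment variables from the compose file shared above):

docker pull qmcgaw/gluetun:latest
docker run --rm --cap-add=NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=protonvpn \
  -e OPENVPN_USER=myuser+pmp -e OPENVPN_PASSWORD=mypassword \
  -e VPN_PORT_FORWARDING=on \
  -e LOG_LEVEL=debug \
  qmcgaw/gluetun:latest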

but the actual container will still lose its connection.

You are talking about internet --> forwarded port through Gluetun --> Plex container, correct?

If so, did you maybe have any internal VPN restarts (due to being unhealthy)?

@akutruff as @clemone210 mentioned, what you experience is likely the bug in Gluetun that was fixed only 3 days ago; are you sure you are running the latest image? I also answered on the closed issue. If you are running an image built on or after 2023-09-24 and still experience the problem, let me know!

akutruff commented 11 months ago

@qdm12 I pulled the latest tagged image just now and will try. I also see you have an image tagged pr-1742. In general, will the latest tag have any of these PRs in it? Thanks.

Friday13th87 commented 11 months ago

I pulled pr-1742 and it is a very old build; it says on startup that it's over 90 days old.

I was answering in the already-closed issue. I am not using ProtonVPN but PureVPN with "FIREWALL_VPN_INPUT_PORTS" and have the same issue: after a while I end up in an unhealthy/healthy loop; the connection is stable and working, but port forwarding is lost.

I re-pulled the latest image just now, but it still reports: Running version latest built on 2023-09-24T16:54:36.207Z (commit 9b00763)

clemone210 commented 11 months ago

@qdm12 when I pull the latest docker image with the tag :latest, I am still 1 commit behind according to the log.

akutruff commented 11 months ago

@qdm12 For the latest tagged image I still see the behavior. But I don't think your debug statement is in this image.

gluetun | Running version latest built on 2023-09-24T16:54:36.207Z (commit 9b00763)

The port forwarding does not happen again, and the control server still returns 0.

gluetun  | 2023-09-26T14:27:55Z INFO [healthcheck] unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout
gluetun  | 2023-09-26T14:28:03Z INFO [healthcheck] program has been unhealthy for 6s: restarting VPN (see https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md)
gluetun  | 2023-09-26T14:28:03Z INFO [vpn] stopping
gluetun  | 2023-09-26T14:28:03Z INFO [port forwarding] stopping
gluetun  | 2023-09-26T14:28:03Z INFO [firewall] removing allowed port 65103...
gluetun  | 2023-09-26T14:28:03Z INFO [port forwarding] removing port file /tmp/gluetun/forwarded_port
gluetun  | 2023-09-26T14:28:03Z INFO [vpn] starting
gluetun  | 2023-09-26T14:28:03Z INFO [firewall] allowing VPN connection...
gluetun  | 2023-09-26T14:28:03Z INFO [wireguard] Using userspace implementation since Kernel support does not exist
gluetun  | 2023-09-26T14:28:03Z INFO [wireguard] Connecting to ***
gluetun  | 2023-09-26T14:28:03Z INFO [wireguard] Wireguard setup is complete. Note Wireguard is a silent protocol and it may or may not work, without giving any error message. Typically i/o timeout errors indicate the Wireguard connection is not working.
gluetun  | 2023-09-26T14:28:13Z ERROR [ip getter] Get "https://ipinfo.io/": dial tcp: lookup ipinfo.io on 10.2.0.1:53: read udp 10.2.0.2:58403->10.2.0.1:53: i/o timeout - retrying in 5s
gluetun  | 2023-09-26T14:28:14Z INFO [healthcheck] program has been unhealthy for 11s: restarting VPN (see https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md)
gluetun  | 2023-09-26T14:28:14Z INFO [vpn] stopping
gluetun  | 2023-09-26T14:28:14Z INFO [vpn] starting
gluetun  | 2023-09-26T14:28:14Z INFO [firewall] allowing VPN connection...
gluetun  | 2023-09-26T14:28:14Z INFO [wireguard] Using userspace implementation since Kernel support does not exist
gluetun  | 2023-09-26T14:28:14Z INFO [wireguard] Connecting to ***
gluetun  | 2023-09-26T14:28:14Z INFO [wireguard] Wireguard setup is complete. Note Wireguard is a silent protocol and it may or may not work, without giving any error message. Typically i/o timeout errors indicate the Wireguard connection is not working.
gluetun  | 2023-09-26T14:28:18Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:36074 in 12.86Β΅s
gluetun  | 2023-09-26T14:28:23Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:36082 in 12.431Β΅s
gluetun  | 2023-09-26T14:28:28Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:36090 in 13.53Β΅s
gluetun  | 2023-09-26T14:28:28Z ERROR [ip getter] Get "https://ipinfo.io/": dial tcp: lookup ipinfo.io on 10.2.0.1:53: read udp 10.2.0.2:34539->10.2.0.1:53: i/o timeout - retrying in 10s
gluetun  | 2023-09-26T14:28:29Z INFO [healthcheck] healthy!
gluetun  | 2023-09-26T14:28:33Z INFO [http server] 200 GET /portforwarded wrote 11B to 127.0.0.1:36100 in 14.87Β΅s
clemone210 commented 11 months ago

but the actual container will still loose its connection.

You are talking about internet --> forwarded port through Gluetun --> Plex container correct?

If so, did you have any vpn internal restarts (due to unhealthy) maybe?

So, for example, in gluetun the forwarded port is 37706 while in Plex the auto-allocated port is 56146. I am honestly not sure why the ports are different, but locally Plex always listens on 32400, which is also what my Cloudflare tunnel connects to. At the moment the port also remains the same within the Plex container, but as Plex checks whether the port mapping is healthy, it fails at some point. After the failure it works again with the same port for a short time, but then it keeps looping between working and not working in Plex.

Maybe the debug logging will shed some light on this.

Stetsed commented 11 months ago

I am currently experiencing the same issue and have not found a workaround, including stopping and starting the VPN via the API; the only thing that fixes it is stopping and restarting the container. It does seem to happen AFTER a healthcheck fails and the VPN restarts, so as long as there is zero interruption in the connection, it does work.

qdm12 commented 11 months ago

My apologies everyone for:

qdm12 commented 11 months ago

d4df87286e1e14c5471d09800ad8408285b44e58 should finally fix it for good. Previously I only tested the case where it was unhealthy from the start (never port forwarded); now I tested that it does re-trigger port forwarding after a successful port forward -> unhealthy VPN restart (by disconnecting my ethernet cable, lol; I didn't find a fancier way to do it).

Let me know if this is fixed please πŸ™ Thanks!!!!

clemone210 commented 11 months ago

So for the gluetun image it seems to be okay: the debug output shows that the port is maintained, and that it stays the same port.

My problem still exists, and I am not sure what is causing it. Furthermore, the forwarded port within gluetun never matches the one that is (automatically) exposed in the Plex container.

ZekuX commented 11 months ago

Thank you for your hard work. I can confirm that with the latest version the problem sadly still exists: after a while the container isn't responsive anymore, and only a restart fixes it.

fizzxed commented 11 months ago

So for the gluetun image it seems to be okay: the debug output shows that the port is maintained, and that it stays the same port.

My problem still exists, and I am not sure what is causing it. Furthermore, the forwarded port within gluetun never matches the one that is (automatically) exposed in the Plex container.

I don't think gluetun supports UPnP, so you will have to manually set the forwarded port in Plex to the one gluetun gets from ProtonVPN. Maybe you can forward 32400 through gluetun for LAN access and hope and pray the WAN port you manually set never changes. See this. I don't think they allow setting the WAN port through their web API, but maybe it's undocumented somewhere.

Edit: Perhaps it is possible to update the Plex public/WAN port through the web API, since this Python API apparently can do it, but I admit I spent all of 2 minutes looking at it and have not verified. You could then add a cron job that periodically updates the WAN port with the one gluetun reports (a rough sketch follows below), or maybe, if gluetun someday supported webhooks, we could spin up something to do it on port change?
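As a rough illustration of that cron-job idea; a sketch only, where the /:/prefs endpoint and the ManualPortMapping* preference names are assumptions based on Plex's Preferences.xml and are not verified:

#!/bin/sh
# hypothetical: push gluetun's forwarded port into Plex's manual WAN port mapping
PLEX_URL=http://127.0.0.1:32400   # assumed Plex address
PLEX_TOKEN=changeme               # your X-Plex-Token
PORT=$(cat /tmp/gluetun/forwarded_port)
curl -s -X PUT "$PLEX_URL/:/prefs?ManualPortMappingMode=1&ManualPortMappingPort=$PORT&X-Plex-Token=$PLEX_TOKEN"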

gmillerd commented 11 months ago

Make sure that the IP address you are using from Proton (which is definitely not dedicated) doesn't already have someone else using this port; otherwise, server-hop to a new one and try again. Even if you yourself connect, port forward, disconnect the VPN and try again, that port will still be bound for a considerable amount of time and you will not be able to rebind to it ... as Proton's endpoint still has it in use.

Friday13th87 commented 10 months ago

For me it was similar: after 4 days (with the current version) port forwarding stopped working. I am using a cron script which checks whether port forwarding is still working and, if not, restarts the container, so I don't care much anymore; but the script did run last night, so port forwarding had indeed stopped working again.

Before the last update it stopped working at least once a day, so it has gotten much better, but it is not totally solved.
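Such a watchdog can be quite small; a sketch, assuming the gluetun container is named gluetun and its /tmp/gluetun files are mounted on the host:

#!/bin/sh
# cron watchdog: restart gluetun when the forwarded port stops answering
IP=$(cat /tmp/gluetun/ip)
PORT=$(cat /tmp/gluetun/forwarded_port)
nc -z -w 5 "$IP" "$PORT" || docker restart gluetun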

CplPwnies commented 10 months ago

For me it was similar: after 4 days (with the current version) port forwarding stopped working. I am using a cron script which checks whether port forwarding is still working and, if not, restarts the container, so I don't care much anymore; but the script did run last night, so port forwarding had indeed stopped working again.

Before the last update it stopped working at least once a day, so it has gotten much better, but it is not totally solved.

I'm glad to know it's not just me. Since this seems like a slightly different issue than what is being discussed in this thread (though, very adjacent), I deleted my original comment and opened up issue #1891

AlbyGNinja commented 10 months ago

I want to add a problem to this: whenever it happens and the service is restarted, the containers routing through Gluetun stop working properly unless a docker restart xyz is issued, just like in this issue: https://github.com/qdm12/gluetun/issues/405

N47H4N commented 10 months ago

I have more or less the same problem here. I can't figure out how to assign my forwarded (VPN) port to the port of my linked container.

Any idea, please?

AlbyGNinja commented 10 months ago

I have more or less the same problem here. I can't figure out how to assign my forwarded (VPN) port to the port of my linked container.

Any idea, please?

I've made a Python script to keep my qBittorrent port up to date; you can check it out in case it gives you some hints.

SnoringDragon commented 10 months ago

I have more or less the same problem here. I can't figure out how to assign my forwarded (VPN) port to the port of my linked container.

Any idea, please?

I actually have a container I built to solve specifically this problem, which I posted under another issue. I hope this helps; I plan to update it with the listed suggestions when I get a chance, but I have been busy as I am a student.

AlbyGNinja commented 10 months ago

I have more or less the same problem here. I can't figure out how to assign my forwarded (VPN) port to the port of my linked container. Any idea, please?

I actually have a container I built to solve specifically this problem, which I posted under another issue. I hope this helps; I plan to update it with the listed suggestions when I get a chance, but I have been busy as I am a student.

Oh sh*t, I was looking for something like that! Damn, I wish I had found it a week ago 😒

CplPwnies commented 10 months ago

I have more or less the same problem here. I can't figure out how to assign my forwarded (VPN) port to the port of my linked container. Any idea, please?

I actually have a container I built to solve specifically this problem, which I posted under another issue. I hope this helps; I plan to update it with the listed suggestions when I get a chance, but I have been busy as I am a student.

Would you mind sharing your qBittorrent connection config? Every time I manually set the port in my qBittorrent config to the port supplied by gluetun, it shows Disconnected.

FrenchGithubUser commented 10 months ago

There is also this container to update qBittorrent's forwarded port, for anyone interested.

N47H4N commented 10 months ago

My problem here is that I can't change my application's port. That's why I'm wondering whether we can map/NAT the VPN's forwarded port to my internal fixed port.
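(For reference, one manual way to do this, until gluetun supports it natively, is an iptables REDIRECT inside the gluetun container; a sketch, where 38229 stands in for the currently forwarded port and 32400 for the fixed application port:)

FWD_PORT=38229   # whatever gluetun currently reports as forwarded
APP_PORT=32400   # the application's fixed listening port
docker exec gluetun iptables -t nat --append PREROUTING -i tun0 -p tcp \
  --dport "$FWD_PORT" -j REDIRECT --to-ports "$APP_PORT"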

Stetsed commented 10 months ago

I have been running the latest version of gluetun for the past 2 days and it has functioned fine: even when it loses the connection and reconnects, it successfully gets a port again, which then flows down to qBittorrent via an update script.

qdm12 commented 10 months ago

Hello everyone, sorry for the delay in answering:

  1. It seems the port forwarding code triggers properly after an unhealthy event πŸŽ‰
  2. When the forwarded port no longer works, do you see any warning or error in your logs?
  3. I just noticed on ProtonVPN's page that they changed natpmpc -a 0 0 to natpmpc -a 1 0; this is now changed in the code as well with commit ee413f59a2d7ce9dfb381d0c392ba89d5710a3e1. Maybe this helps? πŸ€”
  4. I implemented the NAT-PMP protocol myself, so maybe I missed a detail that could cause it to no longer work after hours/days, although I would expect the VPN gateway to complain about it. Anyway, has anyone tried using natpmpc? You can do:
docker run -it --rm --network="container:gluetun" ubuntu 
apt-get update -y
apt-get install -y natpmpc
# 10.2.0.1 is your VPN gateway, you can find it with ip route
natpmpc -g 10.2.0.1
natpmpc -a 1 0 udp 60 -g 10.2.0.1
natpmpc -a 1 0 tcp 60 -g 10.2.0.1
while true ; do date ; natpmpc -a 1 0 udp 60 -g 10.2.0.1 && natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || { echo -e "ERROR with natpmpc command \a" ; break ; } ; sleep 45 ; done

(instructions largely copied from protonvpn's page)

akutruff commented 10 months ago

@qdm12

I started testing natpmpc but stopped after you said you had fixes. I'll continue the work today. However, does the routing / firewall code need to be aware of the port forward at all?

akutruff commented 10 months ago

@qdm12

Yeah, just saw in the logs that you are doing something with the firewall. Is it pointless for me to try to set this up in a separate docker container?

vpn-gluetun-1  | 2023-10-06T17:27:24Z INFO [firewall] setting allowed input port 38229 through interface tun0...
vpn-gluetun-1  | 2023-10-06T17:27:24Z INFO [port forwarding] writing port file /tmp/gluetun/forwarded_port
qdm12 commented 10 months ago

@akutruff indeed, sorry, I completely forgot. You can do:

docker exec gluetun iptables --append INPUT -i tun0 -p tcp --dport 38229 -j ACCEPT
syss commented 10 months ago

I found that the forwarded port becomes defunct after a while. After starting I get full up/down speeds; while it works there are small interruptions of the upload of ~2 seconds before it goes back to full speed. This works for, say, 30 minutes, and then the upload speed plummets. Sometimes it helped to restart the container, sometimes it helped to change the server, but it stopped working overall after a time, even though I get port OK messages from natpmpc. I had this issue with rtorrent in a container and WireGuard on the host, and now with qBittorrent and gluetun both in containers.

In the not-OK state I see that DHT has 0 hosts; Kali Linux torrents download and upload, but Raspberry Pi torrents don't start. Other magnet links also do not start downloading metadata in the failing state.

To me, ProtonVPN with WireGuard and port forwarding is just not working. I have a strong suspicion that the flaw is on their side.

I'll try the OpenVPN option or get my money back, because it just doesn't work for me.

A pity that Mullvad closed their ports (which I used before).

qdm12 commented 10 months ago

@syss Thanks for clarifying! Let me know how it goes with OpenVPN, and others, feel free to chime in with what you find out. I'll keep this issue open, but won't mark it as an urgent/blocker for the next release anymore.

syss commented 10 months ago

The OpenVPN connection seems to stay intact. However, the down/upload speeds are very flaky, going from 100 Mbit down to some Kbit and up again. I'm giving up on it; for me the provider does not deliver, and I am getting my money back.

Edit: port forwarding works on OpenVPN and stays open, but with the said quality it is not usable for me.

Edit 2: after a while, lots of peers but no upload with OpenVPN.

syss commented 10 months ago

After tweaking a lot of settings, I can now finally say that ProtonVPN is working nicely with WireGuard.

qBittorrent: the biggest issue I had was that the option Enable local peer discovery was enabled in qBittorrent and caused lots and lots of network issues; after disabling it, things worked fine for me. Additionally, I needed to reduce the number of connections made. I have a 100/20 Mbit connection and use the following settings:

VPN settings:

VPN_SERVICE_PROVIDER=custom
VPN_TYPE=wireguard
VPN_PORT_FORWARDING=on
VPN_PORT_FORWARDING_PROVIDER=protonvpn
VPN_ENDPOINT_IP=<your ip here>
VPN_ENDPOINT_PORT=51820
WIREGUARD_PRIVATE_KEY=<your priv key here>
WIREGUARD_PUBLIC_KEY=<your pub key here>
WIREGUARD_ADDRESSES=10.2.0.2/32
VPN_DNS_ADDRESS=10.2.0.1

I was missing the VPN_PORT_* options before.

When it comes to port forwarding and updating the port, each program has its own method; for qBittorrent I used:

#!/bin/bash

GLUETUN_URL=http://127.0.0.1:8000
QBITTORRENT_URL=https://myurl/qbittorrent

#get the port from gluetun control server and modify it a bit
json="$(curl -L "${GLUETUN_URL}/v1/openvpn/portforwarded" 2>/dev/null | sed 's/port/listen_port/g')"
#set the port in qbittorrent
curl -i -X POST -d "json=${json}" "${QBITTORRENT_URL}/api/v2/app/setPreferences"

But I use this container here: https://hub.docker.com/r/technosam/qbittorrent-gluetun-port-update

So what you could do to forward the exposed port from ProtonVPN is to somehow tell your firewall/router to do a port trigger from the ProtonVPN port to your Plex port.

alcroito commented 10 months ago

So what you could do to forward the exposed port from ProtonVPN is to somehow tell your firewall/router to do a port trigger from the ProtonVPN port to your Plex port.

I feel like this is something gluetun should be able to do automatically when using ProtonVPN, or any other VPN that returns a dynamic port, by forwarding it to some static port that the user provides as configuration.

Basically: establish the VPN connection, extract the dynamic port from "${GLUETUN_URL}/v1/openvpn/portforwarded", and then use something like socat to forward it to the given static port.
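A sketch of that socat idea, assuming socat and jq are available inside gluetun's network namespace (e.g. via a sidecar container) and 32400 is the static port the user chose:

#!/bin/sh
# relay the dynamic VPN-forwarded port to a static local port
GLUETUN_URL=http://127.0.0.1:8000
STATIC_PORT=32400
DYN_PORT=$(curl -s "$GLUETUN_URL/v1/openvpn/portforwarded" | jq -r .port)
exec socat TCP-LISTEN:"$DYN_PORT",fork,reuseaddr TCP:127.0.0.1:"$STATIC_PORT"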

qdm12 commented 9 months ago

@alcroito I just implemented it with VPN_PORT_FORWARDING_LISTENING_PORT from commit 61229118653185e6d94a7d2ca6d3aafff9c92bdf; let me know if it works πŸ˜‰ (it uses iptables PREROUTING REDIRECT instruction(s)).
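Usage should look something like the following sketch; everything except the new VPN_PORT_FORWARDING_LISTENING_PORT variable matches configs already shown in this thread:

docker run -d --name gluetun --cap-add=NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=protonvpn \
  -e OPENVPN_USER=myuser+pmp -e OPENVPN_PASSWORD=mypassword \
  -e VPN_PORT_FORWARDING=on \
  -e VPN_PORT_FORWARDING_LISTENING_PORT=32400 \
  qmcgaw/gluetun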

alcroito commented 9 months ago

@alcroito I just implemented it with VPN_PORT_FORWARDING_LISTENING_PORT from commit 6122911; let me know if it works πŸ˜‰ (it uses iptables PREROUTING REDIRECT instruction(s)).

Thanks a lot! I hope it works. Unfortunately I can't test it yet, because the Docker image has not been updated yet.

KptCheeseWhiz commented 9 months ago

This might be unrelated, but whenever the ProtonVPN-over-WireGuard connection restarts, the forwarded port dies and you need to re-listen on that port; or maybe this is just an issue with Deluge. Here's a script I am using to fix the issue using inotifyd; it also updates the forwarded port if it changes for some reason (it should be straightforward to modify it to work for qBittorrent):

#!/bin/bash
# (bash is needed for the process substitution used below)

FORWARDED_PORT_FILE=/gluetun/forwarded_port

while [ ! -f "$FORWARDED_PORT_FILE" ] || [ -z "$(cat "$FORWARDED_PORT_FILE")" ]; do
  echo "info: waiting for forwarded port file.."
  sleep 5
done

{
  FORWARDED_PORT=$(cat "$FORWARDED_PORT_FILE")
  echo "info: forwarded port is $FORWARDED_PORT"

  while ! nc -z 0.0.0.0 8112 &>/dev/null; do
    echo "info: waiting for deluge to wake up.."
    sleep 5
  done

  deluge-console -c /config "config -s listen_ports [$FORWARDED_PORT,$FORWARDED_PORT]"

  echo "info: watching if the forwarded port has been changed.."
  while :; do
    while read EVENT FILE; do
      if [ "$EVENT" == "x" ]; then
        while [ ! -f "$FORWARDED_PORT_FILE" ] || [ -z "$(cat "$FORWARDED_PORT_FILE")" ]; do
          echo "info: waiting for forwarded port file to be recreated.."
          sleep 5
        done
      fi

      NEW_PORT=$(cat "$FILE")
      if [ "$NEW_PORT" -ne "$FORWARDED_PORT" ]; then
        echo "info: forwarded port has been changed to $NEW_PORT (was $FORWARDED_PORT)"
        FORWARDED_PORT=$NEW_PORT
      else
        echo "info: forwarded port unchanged (is $FORWARDED_PORT)"
        # We need to reset the port since it might be dead and deluge is not aware
        deluge-console -c /config "config -s listen_ports [$((FORWARDED_PORT+1)),$((FORWARDED_PORT+1))]"
        sleep 1
      fi
      deluge-console -c /config "config -s listen_ports [$FORWARDED_PORT,$FORWARDED_PORT]"
    done < <(inotifyd - "$FORWARDED_PORT_FILE:wx")
  done
} &
JeremyGuinn commented 9 months ago

@alcroito I just implemented it with VPN_PORT_FORWARDING_LISTENING_PORT from commit 6122911 let me know if it works πŸ˜‰ (it uses that iptables prerouting redirect instruction(s)).

@qdm12, I built the Dockerfile from commit 6122911. The container starts and successfully connects using the basic config without any port forwarding, but as soon as VPN_PORT_FORWARDING=on is set, I get the following crash:

$ docker build -t qmcgaw/gluetun .
$ docker run -it --rm --cap-add=NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=protonvpn \
  -e VPN_TYPE=openvpn -e VPN_PORT_FORWARDING=on \
  -e OPENVPN_USER=test -e OPENVPN_PASSWORD=test \
  -p 8000:8000/tcp \
  qmcgaw/gluetun

gluetun  | panic: runtime error: invalid memory address or nil pointer dereference
gluetun  | [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x75b3a6]
gluetun  |
gluetun  | goroutine 6 [running]:
gluetun  | github.com/qdm12/gotree.(*Node).Appendf(...)
gluetun  |      github.com/qdm12/gotree@v0.2.0/node.go:37
gluetun  | github.com/qdm12/gluetun/internal/configuration/settings.PortForwarding.toLinesNode({0xc00029c7d3?, 0xc0001eaf30?, 0xc0001eae70?, 0xc00029c7d4?})
gluetun  |      github.com/qdm12/gluetun/internal/configuration/settings/portforward.go:109 +0x146
gluetun  | github.com/qdm12/gluetun/internal/configuration/settings.Provider.toLinesNode({0xc0001eae40, {{0xc000012039, 0x7}, {{0x0, 0xffff00000000}, 0xc00012a000}, {0xc0001eae50, 0x1, 0x1}, {0x0, ...}, ...}, ...})
gluetun  |      github.com/qdm12/gluetun/internal/configuration/settings/provider.go:94 +0x2ca
gluetun  | github.com/qdm12/gluetun/internal/configuration/settings.VPN.toLinesNode({{0xc000012039, 0x7}, {0xc0001eae40, {{0xc000012039, 0x7}, {{...}, 0xc00012a000}, {0xc0001eae50, 0x1, 0x1}, ...}, ...}, ...})
gluetun  |      github.com/qdm12/gluetun/internal/configuration/settings/vpn.go:87 +0xb8
gluetun  | github.com/qdm12/gluetun/internal/configuration/settings.Settings.toLinesNode({{0xc0001eae00, 0xc00029c738}, {{{0x0, 0xffff7f000001}, 0xc00012a000}, 0xc00029c739, {0xc00029c73a, 0xc00029c770, {{...}, 0xc00029c778, ...}, ...}}, ...})
gluetun  |      github.com/qdm12/gluetun/internal/configuration/settings/settings.go:147 +0xb8
gluetun  | github.com/qdm12/gluetun/internal/configuration/settings.Settings.String({{0xc0001eae00, 0xc00029c738}, {{{0x0, 0xffff7f000001}, 0xc00012a000}, 0xc00029c739, {0xc00029c73a, 0xc00029c770, {{...}, 0xc00029c778, ...}, ...}}, ...})
gluetun  |      github.com/qdm12/gluetun/internal/configuration/settings/settings.go:141 +0x31
gluetun  | main._main({0x108da80, 0xc000111540}, {{0x1086f58, 0x7}, {0x1086f60, 0x7}, {0x10885f0, 0xf}}, {0xc000114050, 0x1, ...}, ...)
gluetun  |      ./main.go:278 +0x16b0
gluetun  | main.main.func1()
gluetun  |      ./main.go:92 +0x12c
gluetun  | created by main.main in goroutine 1
gluetun  |      ./main.go:91 +0x5e5

Looks like node is being defined after the new log line you've added: https://github.com/qdm12/gluetun/commit/61229118653185e6d94a7d2ca6d3aafff9c92bdf#diff-6a711fc9088a325002bd9769a59d04cd3dfb31e7c658f5e51b596f6cf9ea0168R109-L97

After a little switcheroo I've got it running, but it then fails when trying to create the NAT redirect; is it supposed to be -i tun0?

ERROR [vpn] redirecting port in firewall: 
  redirecting port: redirecting IPv4 source port 46742 to destination port 55660 on interface tun0: 
  command failed: "iptables -t nat --append PREROUTING -o tun0 -d 127.0.0.1 -p tcp --dport 46742 -j REDIRECT --to-ports 55660":
    iptables v1.8.9 (legacy): Can't use -o with PREROUTING
qdm12 commented 9 months ago

@alcroito my bad, the automated build failed because of a linter error;

@KptCheeseWhiz indeed, what a disastrous commit πŸ˜„ I re-pushed the commit as 4105f74ce19faab30ce0c7758745b7d0751ad08e; it should fix both issues you successfully spotted! πŸ˜‰

Michsior14 commented 9 months ago

@alcroito I just implemented it with VPN_PORT_FORWARDING_LISTENING_PORT from commit 6122911 let me know if it works πŸ˜‰ (it uses that iptables prerouting redirect instruction(s)).

For me this didn't work with Transmission (commit https://github.com/qdm12/gluetun/commit/4105f74ce19faab30ce0c7758745b7d0751ad08e; the port was marked as closed). I've created a docker mod for the linuxserver container instead. If someone is interested, it can be found here.

SnoringDragon commented 9 months ago

@alcroito I just implemented it with VPN_PORT_FORWARDING_LISTENING_PORT from commit 6122911; let me know if it works πŸ˜‰ (it uses iptables PREROUTING REDIRECT instruction(s)).

For me this didn't work with Transmission (commit 4105f74; the port was marked as closed). I've created a docker mod for the linuxserver container instead. If someone is interested, you can check it here.

I would assume the issue is that when Transmission announces to trackers, it includes the callback port set in its config, not the port it is dynamically accessing the internet with. As a result, you still need some intermediary code, as you noticed, such as what you are working on or the container I have (link). The only way I would imagine to change this natively within gluetun/your torrent software would be to get NAT-PMP working properly, which to my knowledge it is not (at least with qBittorrent).

SnoringDragon commented 9 months ago

I have more or less the same problem here. I can't figure out how to assign my forwarded (VPN) port to the port of my linked container. Any idea, please?

I actually have a container I built to solve specifically this problem, which I posted under another issue. I hope this helps; I plan to update it with the listed suggestions when I get a chance, but I have been busy as I am a student.

Would you mind sharing your qBittorrent connection config? Every time I manually set the port in my qBittorrent config to the port supplied by gluetun, it shows Disconnected.

Apologies for taking forever to get back to this, but if you're still looking for an answer, here's what I have:

gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    restart: unless-stopped
    labels:
      #Domain routing 

      com.centurylinklabs.watchtower.monitor-only: true
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    volumes:
      - ${DIR}/config/gluetun:/gluetun
      - ${DIR}/tmp/gluetun:/tmp/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=custom
      - VPN_TYPE=wireguard
      - VPN_ENDPOINT_IP=[IP]
      - VPN_ENDPOINT_PORT=[PORT]
      - WIREGUARD_PUBLIC_KEY="[Public Key]"
      - WIREGUARD_PRIVATE_KEY="[Private Key]"
      - WIREGUARD_ADDRESSES="10.2.0.2/32"
      - VPN_PORT_FORWARDING=on
      - VPN_PORT_FORWARDING_PROVIDER=protonvpn
    networks:
      - external-network
      - qbittorrent-proxy

  qbittorrent:
    image: qbittorrentofficial/qbittorrent-nox:latest
    container_name: qbittorrent
    restart: unless-stopped
    labels:
      com.centurylinklabs.watchtower.monitor-only: true
    volumes:
      - ${DIR}/config:/config
      - ${DOWNLOADS}:/downloads
      - /media/{user}/Media/torrent:/downloads2
    network_mode: "service:gluetun"

  qmap:
    image: snoringdragon/gluetun-qbittorrent-port-manager:latest
    container_name: qmap
    restart: unless-stopped
    labels:
      com.centurylinklabs.watchtower.monitor-only: true
    volumes:
      - ${DIR}/tmp/gluetun:/tmp/gluetun
    environment:
      QBITTORRENT_SERVER: localhost
      QBITTORRENT_PORT: 8080
      QBITTORRENT_USER: "[username]"
      QBITTORRENT_PASS: "[password]"
      PORT_FORWARDED: /tmp/gluetun/forwarded_port
      HTTP_S: http
    network_mode: "service:gluetun"

I have recently also done a bunch of updates for improved compatibility.

Stetsed commented 8 months ago

So after some investigating, it seems like the problem isn't Gluetun failing to port forward, but qBittorrent losing the port binding when the tunnel restarts. A really hacky way around this that I have found is to tell it to listen on all addresses and then switch it back to the tunnel address; this forces it to rebind to the port and seems to fix the issue. The script just netcats the IP and port, and if the port is closed it forces the rebind. I have pasted it below and will report if I have any issues (it would be easy to integrate this with the other qbittorrent-port-manager scripts).

#!/bin/bash
# Requires USER, PASSWORD and HOST to be set for the qBittorrent WebUI;
# the tmp/ files below are gluetun's /tmp/gluetun directory mounted at ./tmp.

cd /root/docker/arr

while true; do
    while [[ ! -f tmp/ip || ! -f tmp/forwarded_port ]]; do
        echo "Waiting for gluetun to connect..."
        sleep 1
        FILE_NO_EXIST=1
    done

    if [[ $FILE_NO_EXIST -eq 1 ]]; then
        FILE_NO_EXIST=0
        echo "gluetun connected"
        sleep 240
    fi

    nc -v -z -w 3 $(cat tmp/ip) $(cat tmp/forwarded_port)

    if [[ $? -eq 0 ]]; then
        sleep 60
    else
        echo "$(date -u +%Y-%m-%d-%H:%M) Port is closed, forcing qbittorent to relisten" | tee -a tmp/port_checker.log

        curl -s -c tmp/qbittorrent-cookies.txt --data "username=$USER&password=$PASSWORD" https://$HOST/api/v2/auth/login >/dev/null

        curl -b tmp/qbittorrent-cookies.txt -X POST https://$HOST/api/v2/app/setPreferences --data 'json={"current_interface_address":"10.2.0.2"}'
        curl -b tmp/qbittorrent-cookies.txt -X POST https://$HOST/api/v2/app/setPreferences --data 'json={"current_interface_address":"0.0.0.0"}'

        sleep 240
    fi
done