qdm12 / gluetun

VPN client in a thin Docker container for multiple VPN providers, written in Go, and using OpenVPN or Wireguard, DNS over TLS, with a few proxy servers built-in.
https://hub.docker.com/r/qmcgaw/gluetun
MIT License

Bug: OpenVPN tried to add an IP route which already exists #1939

Open WINOFFRG opened 1 year ago

WINOFFRG commented 1 year ago

Is this urgent?

Yes

Host OS

Ubuntu 20.04.6 LTS

CPU arch

x86_64

VPN service provider

Custom

What are you using to run the container

docker-compose

What is the version of Gluetun

Running version latest built on 2023-04-12T12:34:51.538Z (commit d4f8eea)

What's the problem 🤔

In my docker-compose file I have multiple OpenVPN configs, and only one of them is able to connect. Is it because of - /dev/net/tun:/dev/net/tun? Maybe something gets locked; I don't have much idea about this. Please check the logs shared below.

Things to note:

2023-11-01T19:09:38Z ERROR [openvpn] OpenVPN tried to add an IP route which already exists (RTNETLINK answers: File exists)
2023-11-01T19:09:38Z WARN [openvpn] Previous error details: Linux route add command failed: external program exited with error status: 2

Share your logs (at least 10 lines)

========================================
========================================
=============== gluetun ================
========================================
=========== Made with ❤️ by ============
======= https://github.com/qdm12 =======
========================================
========================================

Running version latest built on 2023-04-12T12:34:51.538Z (commit d4f8eea)

🔧 Need help? https://github.com/qdm12/gluetun/discussions/new
🐛 Bug? https://github.com/qdm12/gluetun/issues/new
✨ New feature? https://github.com/qdm12/gluetun/issues/new
☕ Discussion? https://github.com/qdm12/gluetun/discussions/new
💻 Email? quentin.mcgaw@gmail.com
💰 Help me? https://www.paypal.me/qmcgaw https://github.com/sponsors/qdm12
2023-11-01T19:09:27Z INFO [routing] default route found: interface eth0, gateway 172.18.0.1 and assigned IP 172.18.0.3
2023-11-01T19:09:27Z INFO [routing] local ethernet link found: eth0
2023-11-01T19:09:27Z INFO [routing] local ipnet found: 172.18.0.0/16
2023-11-01T19:09:27Z INFO [firewall] enabling...
2023-11-01T19:09:27Z DEBUG [firewall] iptables --policy INPUT DROP
2023-11-01T19:09:27Z DEBUG [firewall] iptables --policy OUTPUT DROP
2023-11-01T19:09:27Z DEBUG [firewall] iptables --policy FORWARD DROP
2023-11-01T19:09:27Z DEBUG [firewall] ip6tables --policy INPUT DROP
2023-11-01T19:09:27Z DEBUG [firewall] ip6tables --policy OUTPUT DROP
2023-11-01T19:09:27Z DEBUG [firewall] ip6tables --policy FORWARD DROP
2023-11-01T19:09:27Z DEBUG [firewall] iptables --append INPUT -i lo -j ACCEPT
2023-11-01T19:09:27Z DEBUG [firewall] ip6tables --append INPUT -i lo -j ACCEPT
2023-11-01T19:09:27Z DEBUG [firewall] iptables --append OUTPUT -o lo -j ACCEPT
2023-11-01T19:09:27Z DEBUG [firewall] ip6tables --append OUTPUT -o lo -j ACCEPT
2023-11-01T19:09:27Z DEBUG [firewall] iptables --append OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
2023-11-01T19:09:27Z DEBUG [firewall] ip6tables --append OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
2023-11-01T19:09:27Z DEBUG [firewall] iptables --append INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
2023-11-01T19:09:27Z DEBUG [firewall] ip6tables --append INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
2023-11-01T19:09:27Z DEBUG [firewall] iptables --append OUTPUT -o eth0 -s 172.18.0.3 -d 172.18.0.0/16 -j ACCEPT
2023-11-01T19:09:27Z DEBUG [firewall] ip6tables --append OUTPUT -o eth0 -d ff02::1:ff/104 -j ACCEPT
2023-11-01T19:09:27Z DEBUG [firewall] iptables --append INPUT -i eth0 -d 172.18.0.0/16 -j ACCEPT
2023-11-01T19:09:27Z INFO [firewall] enabled successfully
2023-11-01T19:09:27Z INFO [storage] merging by most recent 13064 hardcoded servers and 13064 servers read from /gluetun/servers.json
2023-11-01T19:09:27Z DEBUG [netlink] IPv6 is not supported after searching 2 links and 0 routes
2023-11-01T19:09:27Z INFO Alpine version: 3.17.3
2023-11-01T19:09:27Z INFO OpenVPN 2.4 version: 2.4.12
2023-11-01T19:09:27Z INFO OpenVPN 2.5 version: 2.5.8
2023-11-01T19:09:27Z INFO Unbound version: 1.17.1
2023-11-01T19:09:27Z INFO IPtables version: v1.8.8
2023-11-01T19:09:27Z INFO Settings summary:
├── VPN settings:
|   ├── VPN provider settings:
|   |   ├── Name: custom
|   |   └── Server selection settings:
|   |       ├── VPN type: openvpn
|   |       └── OpenVPN server selection settings:
|   |           ├── Protocol: UDP
|   |           └── Custom configuration file: /gluetun/AAA.conf
|   └── OpenVPN settings:
|       ├── OpenVPN version: 2.5
|       ├── User: [not set]
|       ├── Password: [not set]
|       ├── Custom configuration file: /gluetun/AAA.conf
|       ├── Network interface: tun0
|       ├── Run OpenVPN as: root
|       └── Verbosity level: 1
├── DNS settings:
|   ├── DNS server address to use: 127.0.0.1
|   ├── Keep existing nameserver(s): no
|   └── DNS over TLS settings:
|       ├── Enabled: yes
|       ├── Update period: every 24h0m0s
|       ├── Unbound settings:
|       |   ├── Authoritative servers:
|       |   |   └── cloudflare
|       |   ├── Caching: yes
|       |   ├── IPv6: no
|       |   ├── Verbosity level: 1
|       |   ├── Verbosity details level: 0
|       |   ├── Validation log level: 0
|       |   ├── System user: root
|       |   └── Allowed networks:
|       |       ├── 0.0.0.0/0
|       |       └── ::/0
|       └── DNS filtering settings:
|           ├── Block malicious: yes
|           ├── Block ads: no
|           ├── Block surveillance: no
|           └── Blocked IP networks:
|               ├── 127.0.0.1/8
|               ├── 10.0.0.0/8
|               ├── 172.16.0.0/12
|               ├── 192.168.0.0/16
|               ├── 169.254.0.0/16
|               ├── ::1/128
|               ├── fc00::/7
|               ├── fe80::/10
|               ├── ::ffff:7f00:1/104
|               ├── ::ffff:a00:0/104
|               ├── ::ffff:a9fe:0/112
|               ├── ::ffff:ac10:0/108
|               └── ::ffff:c0a8:0/112
├── Firewall settings:
|   └── Enabled: yes
├── Log settings:
|   └── Log level: DEBUG
├── Health settings:
|   ├── Server listening address: 127.0.0.1:9999
|   ├── Target address: cloudflare.com:443
|   ├── Read header timeout: 100ms
|   ├── Read timeout: 500ms
|   └── VPN wait durations:
|       ├── Initial duration: 6s
|       └── Additional duration: 5s
├── Shadowsocks server settings:
|   └── Enabled: no
├── HTTP proxy settings:
|   ├── Enabled: yes
|   ├── Listening address: :8888
|   ├── User: 
|   ├── Password: [not set]
|   ├── Stealth mode: yes
|   ├── Log: yes
|   ├── Read header timeout: 1s
|   └── Read timeout: 3s
├── Control server settings:
|   ├── Listening address: :8000
|   └── Logging: yes
├── OS Alpine settings:
|   ├── Process UID: 1000
|   └── Process GID: 1000
├── Public IP settings:
|   ├── Fetching: every 12h0m0s
|   └── IP file path: /tmp/gluetun/ip
└── Version settings:
    └── Enabled: yes
2023-11-01T19:09:27Z INFO [routing] default route found: interface eth0, gateway 172.18.0.1 and assigned IP 172.18.0.3
2023-11-01T19:09:27Z DEBUG [routing] ip rule add from 172.18.0.3/32 lookup 200 pref 100
2023-11-01T19:09:27Z INFO [routing] adding route for 0.0.0.0/0
2023-11-01T19:09:27Z DEBUG [routing] ip route replace 0.0.0.0/0 via 172.18.0.1 dev eth0 table 200
2023-11-01T19:09:27Z INFO [firewall] setting allowed subnets...
2023-11-01T19:09:27Z INFO [routing] default route found: interface eth0, gateway 172.18.0.1 and assigned IP 172.18.0.3
2023-11-01T19:09:27Z DEBUG [routing] ip rule add to 172.18.0.0/16 lookup 254 pref 98
2023-11-01T19:09:27Z INFO [http server] http server listening on [::]:8000
2023-11-01T19:09:27Z INFO [firewall] allowing VPN connection...
2023-11-01T19:09:27Z DEBUG [firewall] iptables --append OUTPUT -d 103.125.147.49 -o eth0 -p udp -m udp --dport 443 -j ACCEPT
2023-11-01T19:09:27Z INFO [dns over tls] using plaintext DNS at address 1.1.1.1
2023-11-01T19:09:27Z INFO [http proxy] listening on :8888
2023-11-01T19:09:27Z INFO [healthcheck] listening on 127.0.0.1:9999
2023-11-01T19:09:27Z DEBUG [firewall] iptables --append OUTPUT -o tun0 -j ACCEPT
2023-11-01T19:09:27Z DEBUG [firewall] ip6tables --append OUTPUT -o tun0 -j ACCEPT
2023-11-01T19:09:27Z INFO [openvpn] DEPRECATED OPTION: --cipher set to 'AES-128-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'AES-128-CBC' to --data-ciphers or change --cipher 'AES-128-CBC' to --data-ciphers-fallback 'AES-128-CBC' to silence this warning.
2023-11-01T19:09:27Z INFO [openvpn] OpenVPN 2.5.8 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Nov  2 2022
2023-11-01T19:09:27Z INFO [openvpn] library versions: OpenSSL 3.0.8 7 Feb 2023, LZO 2.10
2023-11-01T19:09:27Z INFO [openvpn] TCP/UDP: Preserving recently used remote address: [AF_INET]103.125.147.49:443
2023-11-01T19:09:27Z INFO [openvpn] UDP link local: (not bound)
2023-11-01T19:09:27Z INFO [openvpn] UDP link remote: [AF_INET]103.125.147.49:443
2023-11-01T19:09:27Z INFO [openvpn] [server] Peer Connection Initiated with [AF_INET]103.125.147.49:443
2023-11-01T19:09:28Z ERROR [openvpn] Unrecognized option or missing or extra parameter(s) in [PUSH-OPTIONS]:6: block-outside-dns (2.5.8)
2023-11-01T19:09:28Z INFO [openvpn] TUN/TAP device tun0 opened
2023-11-01T19:09:28Z INFO [openvpn] /sbin/ip link set dev tun0 up mtu 1500
2023-11-01T19:09:28Z INFO [openvpn] /sbin/ip link set dev tun0 up
2023-11-01T19:09:28Z INFO [openvpn] /sbin/ip addr add dev tun0 local 10.11.0.26 peer 10.11.0.25
2023-11-01T19:09:31Z INFO [openvpn] UID set to nonrootuser
2023-11-01T19:09:31Z INFO [openvpn] Initialization Sequence Completed
2023-11-01T19:09:31Z INFO [dns over tls] downloading DNS over TLS cryptographic files
2023-11-01T19:09:34Z INFO [healthcheck] program has been unhealthy for 6s: restarting VPN (see https://github.com/qdm12/gluetun/wiki/Healthcheck)
2023-11-01T19:09:34Z INFO [vpn] stopping
2023-11-01T19:09:34Z ERROR [vpn] cannot get version information: Get "https://api.github.com/repos/qdm12/gluetun/commits": context canceled
2023-11-01T19:09:34Z INFO [vpn] starting
2023-11-01T19:09:34Z INFO [firewall] allowing VPN connection...
2023-11-01T19:09:34Z INFO [openvpn] DEPRECATED OPTION: --cipher set to 'AES-128-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'AES-128-CBC' to --data-ciphers or change --cipher 'AES-128-CBC' to --data-ciphers-fallback 'AES-128-CBC' to silence this warning.
2023-11-01T19:09:34Z INFO [openvpn] OpenVPN 2.5.8 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Nov  2 2022
2023-11-01T19:09:34Z INFO [openvpn] library versions: OpenSSL 3.0.8 7 Feb 2023, LZO 2.10
2023-11-01T19:09:34Z INFO [openvpn] TCP/UDP: Preserving recently used remote address: [AF_INET]103.125.147.49:443
2023-11-01T19:09:34Z INFO [openvpn] UDP link local: (not bound)
2023-11-01T19:09:34Z INFO [openvpn] UDP link remote: [AF_INET]103.125.147.49:443
2023-11-01T19:09:34Z INFO [openvpn] [server] Peer Connection Initiated with [AF_INET]103.125.147.49:443
2023-11-01T19:09:36Z WARN [dns over tls] cannot update files: Get "https://www.internic.net/domain/named.root": dial tcp: lookup www.internic.net on 127.0.0.11:53: write udp 172.18.0.3:40503->1.1.1.1:53: write: operation not permitted
2023-11-01T19:09:36Z INFO [dns over tls] attempting restart in 10s
2023-11-01T19:09:36Z ERROR [openvpn] Unrecognized option or missing or extra parameter(s) in [PUSH-OPTIONS]:6: block-outside-dns (2.5.8)
2023-11-01T19:09:36Z INFO [openvpn] TUN/TAP device tun0 opened
2023-11-01T19:09:36Z INFO [openvpn] /sbin/ip link set dev tun0 up mtu 1500
2023-11-01T19:09:36Z INFO [openvpn] /sbin/ip link set dev tun0 up
2023-11-01T19:09:36Z INFO [openvpn] /sbin/ip addr add dev tun0 local 10.11.0.30 peer 10.11.0.29
2023-11-01T19:09:38Z ERROR [openvpn] OpenVPN tried to add an IP route which already exists (RTNETLINK answers: File exists)
2023-11-01T19:09:38Z WARN [openvpn] Previous error details: Linux route add command failed: external program exited with error status: 2
2023-11-01T19:09:38Z INFO [openvpn] UID set to nonrootuser
2023-11-01T19:09:38Z INFO [openvpn] Initialization Sequence Completed
2023-11-01T19:09:45Z INFO [healthcheck] program has been unhealthy for 11s: restarting VPN (see https://github.com/qdm12/gluetun/wiki/Healthcheck)
2023-11-01T19:09:45Z INFO [vpn] stopping
2023-11-01T19:09:45Z INFO [vpn] starting

Share your configuration

proxy-in2:
  image: qmcgaw/gluetun
  container_name: ovpn-AAA
  cap_add:
    - NET_ADMIN
  devices:
    - /dev/net/tun:/dev/net/tun
  ports:
    - 7100:8888/tcp
  volumes:
    - ./data/AAA.conf:/gluetun/AAA:ro
  environment:
    - VPN_SERVICE_PROVIDER=custom
    - OPENVPN_CUSTOM_CONFIG=/gluetun/AAA.conf
    # - HTTPPROXY_LOG=on
    - HTTPPROXY=ON
    - HTTPPROXY_STEALTH=on
  restart: always 

proxy-in3:
  image: qmcgaw/gluetun
  container_name: ovpn-BBB
  cap_add:
    - NET_ADMIN
  devices:
    - /dev/net/tun:/dev/net/tun
  ports:
    - 7200:8888/tcp
  volumes:
    - ./data/BBB.conf:/gluetun/BBB.conf
  environment:
    - VPN_SERVICE_PROVIDER=custom
    - OPENVPN_CUSTOM_CONFIG=/gluetun/BBB.conf
    - HTTPPROXY_LOG=on
    - HTTPPROXY=ON
    - HTTPPROXY_STEALTH=on
    - LOG_LEVEL=debug
  restart: always
ezekieldas commented 1 year ago

Can you try these in separate containers? With your existing configuration I believe there's conflict with device names, addressing, etc.

This item from the wiki includes a short comment, but I think it's key: "You can easily run multiple Gluetun containers..." does not mean you can run multiple gluetun services in the same container.

https://github.com/qdm12/gluetun-wiki/blob/main/setup/advanced/multiple-gluetun.md
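
As a quick sanity check that they really are isolated, each container should have its own network namespace and therefore its own tun0; using the container names from your compose file:

docker exec ovpn-AAA ip addr show tun0
docker exec ovpn-BBB ip addr show tun0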

WINOFFRG commented 1 year ago

Hey! Kindly check the docker-compose above: these are two separate containers, both mounted on the host. The gluetun service runs in isolation in each of those two containers, with a different custom config in each. The only query I have in mind is that maybe something related to /dev/net/tun:/dev/net/tun is causing the issue, as both containers mount the same device; not sure, I don't have much idea about this.

ezekieldas commented 1 year ago

You've omitted the services: element, so it appears you're attempting to run two services rather than two separate containers. So, assuming these are in fact separate containers, I'd suggest trying a second tun device on the container host. I just tried this and it seems to work:

# mknod /dev/net/tun0 c 10 200
# chmod 666 /dev/net/tun0
# docker compose up -d
# docker logs gluetun
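
(In the mknod command, 10 and 200 are the character device major/minor numbers of the Linux TUN clone device, the same ones behind /dev/net/tun, so the new node points at the same kernel device.)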

|   └── Wireguard settings:
        [ ... ]
|       └── Network interface: tun0
WINOFFRG commented 1 year ago

Hi! Sorry, my bad, services: is there. However, this doesn't seem to work for me. I ran the first two commands, which added tun0.

I'm not sure if I need to change the device in the second container's config, but I tried all of the combinations below:

devices:
      - /dev/net/tun0:/dev/net/tun0
devices:
      - /dev/net/tun:/dev/net/tun
devices:
      - /dev/net/tun0:/dev/net/tun

and the results are still the same. The first container runs fine, but even if I stop and remove that container, the second one still doesn't work.

|   └── OpenVPN settings:
|       ├── OpenVPN version: 2.5
|       ├── User: [not set]
|       ├── Password: [not set]
|       ├── Custom configuration file: /gluetun/BBB.conf
|       ├── Network interface: tun0
|       ├── Run OpenVPN as: root
|       └── Verbosity level: 1

This happens even if only one gluetun container is running on the whole system. I also tried other configs, suspecting some issue with the current ovpn configs; they seem to work fine on other clients, but hit the above issue only here.

ezekieldas commented 1 year ago

I'm not certain the /dev/net/tun devices are the root cause of this issue. Something that concerns me is that you seem to have two containers under a single services: definition. What happens if you separate each one into its own compose file and bring them up separately?

qdm12 commented 1 year ago

/dev/net/tun is unlikely the cause here. As the error message OpenVPN tried to add an IP route which already exists says, this is a routing problem. Usually each container has its own routing; maybe on your setup it's mixed for some reason. Could you try running a container (like alpine) connected to the Docker network where the one gluetun instance is running, and run ip route show table all?
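
For example, a rough sketch, where myproject_default is a placeholder for whatever docker network ls shows for your compose project (plain alpine may need iproute2 for the table option):

docker run -it --rm --network myproject_default alpine sh -c 'apk add -q iproute2 && ip route show table all'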

WINOFFRG commented 1 year ago

Hi! Thanks for checking. I'm running just one gluetun container now, and below is the output of the ip route show table all command:

100.65.251.80 dev tailscale0 table 52 
100.100.100.100 dev tailscale0 table 52 
100.110.223.26 dev tailscale0 table 52 
100.115.52.112 dev tailscale0 table 52 
default via 10.1.0.1 dev eth0 proto dhcp src 10.1.0.4 metric 100 
10.1.0.0/24 dev eth0 proto kernel scope link src 10.1.0.4 
168.63.129.16 via 10.1.0.1 dev eth0 proto dhcp src 10.1.0.4 metric 100 
169.254.169.254 via 10.1.0.1 dev eth0 proto dhcp src 10.1.0.4 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
172.18.0.0/16 dev br-377c5a658b57 proto kernel scope link src 172.18.0.1 
172.19.0.0/16 dev br-f9db474fc9e0 proto kernel scope link src 172.19.0.1 linkdown 
172.20.0.0/16 dev br-1cc25968a922 proto kernel scope link src 172.20.0.1 
172.21.0.0/16 dev br-705dffb3c3fc proto kernel scope link src 172.21.0.1 linkdown 
broadcast 10.1.0.0 dev eth0 table local proto kernel scope link src 10.1.0.4 
local 10.1.0.4 dev eth0 table local proto kernel scope host src 10.1.0.4 
broadcast 10.1.0.255 dev eth0 table local proto kernel scope link src 10.1.0.4 
local 100.105.116.126 dev tailscale0 table local proto kernel scope host src 100.105.116.126 
broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1 
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1 
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1 
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1 
broadcast 172.17.0.0 dev docker0 table local proto kernel scope link src 172.17.0.1 linkdown 
local 172.17.0.1 dev docker0 table local proto kernel scope host src 172.17.0.1 
broadcast 172.17.255.255 dev docker0 table local proto kernel scope link src 172.17.0.1 linkdown 
broadcast 172.18.0.0 dev br-377c5a658b57 table local proto kernel scope link src 172.18.0.1 
local 172.18.0.1 dev br-377c5a658b57 table local proto kernel scope host src 172.18.0.1 
broadcast 172.18.255.255 dev br-377c5a658b57 table local proto kernel scope link src 172.18.0.1 
broadcast 172.19.0.0 dev br-f9db474fc9e0 table local proto kernel scope link src 172.19.0.1 linkdown 
local 172.19.0.1 dev br-f9db474fc9e0 table local proto kernel scope host src 172.19.0.1 
broadcast 172.19.255.255 dev br-f9db474fc9e0 table local proto kernel scope link src 172.19.0.1 linkdown 
broadcast 172.20.0.0 dev br-1cc25968a922 table local proto kernel scope link src 172.20.0.1 
local 172.20.0.1 dev br-1cc25968a922 table local proto kernel scope host src 172.20.0.1 
broadcast 172.20.255.255 dev br-1cc25968a922 table local proto kernel scope link src 172.20.0.1 
broadcast 172.21.0.0 dev br-705dffb3c3fc table local proto kernel scope link src 172.21.0.1 linkdown 
local 172.21.0.1 dev br-705dffb3c3fc table local proto kernel scope host src 172.21.0.1 
broadcast 172.21.255.255 dev br-705dffb3c3fc table local proto kernel scope link src 172.21.0.1 linkdown 
fd7a:115c:a1e0::/48 dev tailscale0 table 52 metric 1024 pref medium
::1 dev lo proto kernel metric 256 pref medium
fd7a:115c:a1e0:ab12:4843:cd96:6269:747e dev tailscale0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev br-377c5a658b57 proto kernel metric 256 pref medium
fe80::/64 dev docker0 proto kernel metric 256 linkdown pref medium
fe80::/64 dev br-705dffb3c3fc proto kernel metric 256 linkdown pref medium
fe80::/64 dev tailscale0 proto kernel metric 256 pref medium
fe80::/64 dev br-f9db474fc9e0 proto kernel metric 256 linkdown pref medium
fe80::/64 dev br-1cc25968a922 proto kernel metric 256 pref medium
fe80::/64 dev veth4f1a8dc proto kernel metric 256 pref medium
fe80::/64 dev veth1a4fefe proto kernel metric 256 pref medium
fe80::/64 dev veth3fb88b3 proto kernel metric 256 pref medium
fe80::/64 dev veth794ab31 proto kernel metric 256 pref medium
fe80::/64 dev veth9067f1b proto kernel metric 256 pref medium
local ::1 dev lo table local proto kernel metric 0 pref medium
local fd7a:115c:a1e0:ab12:4843:cd96:6269:747e dev tailscale0 table local proto kernel metric 0 pref medium
local fe80::42:32ff:fe8d:2d23 dev br-f9db474fc9e0 table local proto kernel metric 0 pref medium
local fe80::42:35ff:fe36:45a2 dev br-377c5a658b57 table local proto kernel metric 0 pref medium
local fe80::42:9cff:fed2:b0f8 dev br-1cc25968a922 table local proto kernel metric 0 pref medium
local fe80::42:cfff:fec1:f92d dev br-705dffb3c3fc table local proto kernel metric 0 pref medium
local fe80::42:e4ff:fefc:3b06 dev docker0 table local proto kernel metric 0 pref medium
local fe80::20d:3aff:fe3e:63f dev eth0 table local proto kernel metric 0 pref medium
local fe80::3913:7fd4:cdcd:99eb dev tailscale0 table local proto kernel metric 0 pref medium
local fe80::40e2:afff:fe18:ebf9 dev veth1a4fefe table local proto kernel metric 0 pref medium
local fe80::7c19:69ff:fe8c:dbb1 dev veth3fb88b3 table local proto kernel metric 0 pref medium
local fe80::ac09:83ff:fe14:da9d dev veth794ab31 table local proto kernel metric 0 pref medium
local fe80::b861:c7ff:fed4:b28e dev veth4f1a8dc table local proto kernel metric 0 pref medium
local fe80::e850:d0ff:fef5:9c7a dev veth9067f1b table local proto kernel metric 0 pref medium
multicast ff00::/8 dev eth0 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev br-377c5a658b57 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev docker0 table local proto kernel metric 256 linkdown pref medium
multicast ff00::/8 dev br-705dffb3c3fc table local proto kernel metric 256 linkdown pref medium
multicast ff00::/8 dev tailscale0 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev br-f9db474fc9e0 table local proto kernel metric 256 linkdown pref medium
multicast ff00::/8 dev br-1cc25968a922 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth4f1a8dc table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth1a4fefe table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth3fb88b3 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth794ab31 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth9067f1b table local proto kernel metric 256 pref medium
WINOFFRG commented 1 year ago

Hi! Any updates on this issue?

WINOFFRG commented 11 months ago

Hi! Sorry to bother you; any updates? As an update: while browsing through other issues I came across https://github.com/qdm12/gluetun/issues/1986, which is also one I encounter. I just modified my config accordingly.

Before:

<connection>
proto tcp-client
remote REDACTED_IP 443
</connection>
<connection>
proto udp
remote REDACTED_IP 443
</connection>

After:

proto udp
remote REDACTED_IP 443

And it starts to work, however I'm not sure what happened; I have tried a few past releases as well and the same error keeps popping up. I have tried restarting the device and reinstalling Docker, still the same. Kindly help if any updates are available on this.

qdm12 commented 11 months ago

Sorry for the delay.

Hi! Thanks for checking. I'm running just one gluetun container now, and below is the output of the ip route show table all command:

That's a lot of routes! Is this from a command run in Gluetun, i.e. docker exec gluetun ip route show table all (or a container connected to Gluetun), or is this on your host?

Just for update, As I was browsing through other issues https://github.com/qdm12/gluetun/issues/1986 this is also the one I encounter.

I'm not sure how this is relevant?

Maybe you meant the original issue #1967 from which that issue was created? If so, the commit 75fd86962542eab693d7698c0e9e731b2f391bd1 (latest image) fixed support for tcp-client for the custom provider.

however I'm not sure what happened; I have tried a few past releases as well and the same error keeps popping up

That part of your sentence is kind of confusing 😄 So running proto udp works on everything, but proto tcp-client doesn't, right?

WINOFFRG commented 11 months ago

Hi! Thanks a lot for responding. Just after your response, I tried checking again so I could share proper error details; however, I'm not sure what exactly happened: without updating the image, the docker compose, or even the VPN configs, it started working again. Earlier it would sometimes run and then stop working after a restart, but that's not happening now. I will wait a little longer before closing this issue ...

I ran that command on the host itself.

I'm not sure how this is relevant?

I thought it could be related, or could even be another issue, so I shared that context as well in case it helped with debugging. To give more details, my ovpn configs are formatted like below:

<connection>
proto tcp-client
remote MASKED_IP 443
</connection>
<connection>
proto udp
remote MASKED_IP 443
</connection>
#push "redirect-gateway def1"
tls-client
remote-cert-tls server
cipher AES-128-CBC
nobind
dev tun0
pull
resolv-retry infinite
#compress lzo
tun-mtu 1500
tun-mtu-extra 32
mssfix 1450
persist-tun
persist-key
verb 3
route-method exe
route-delay 2

...

And all configs in the same format used to give this error on Running version latest built on 2023-12-09T17:29:04.776Z (commit 657b4b7):

  1. ERROR [vpn] allowing VPN connection through firewall: allowing output traffic through VPN connection: command failed: "iptables --append OUTPUT -d MASKED_IP -o eth0 -p tcp-client -m tcp-client --dport 443 -j ACCEPT": iptables v1.8.9 (legacy): unknown protocol "tcp-client" specified. Try 'iptables -h' or 'iptables --help' for more information.: exit status 2
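
For what it's worth, tcp-client is OpenVPN configuration syntax rather than an IP protocol that iptables understands, so presumably the rule needed to be generated with -p tcp instead, along the lines of:

iptables --append OUTPUT -d MASKED_IP -o eth0 -p tcp -m tcp --dport 443 -j ACCEPT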

I think this is what you mentioned as fixed in the latest commit. Earlier, I used to work around this issue by renaming tcp-client to tcp, but then a new error comes up:

  2. ERROR [openvpn] Each 'connection' block must contain exactly one 'remote' directive

This one was self-explanatory to me, so in that case I just removed the connection blocks, as shown below.

Before:

<connection>
proto tcp
remote MASKED_IP 443
</connection>
<connection>
proto udp
remote MASKED_IP 443
</connection>

After:

proto udp
remote MASKED_IP 443

and this used to work, but unfortunately the last time I restarted all the containers I had to raise this issue. Somehow it now seems to be fixed without any update 🤔 Not sure if there's anything pulled from remote internally. However, regarding issue 2 mentioned above, I'm still applying that workaround of removing the connection blocks. Not sure why the config works with OpenVPN directly and not here on Gluetun. If required, I can share the OpenVPN config privately.

WINOFFRG commented 11 months ago

Unfortunately, I've started getting the same error again now.

2023-12-23T15:27:13Z INFO [healthcheck] program has been unhealthy for 21s: restarting VPN (see https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md)
2023-12-23T15:27:13Z INFO [vpn] stopping
2023-12-23T15:27:13Z INFO [vpn] starting
2023-12-23T15:27:13Z INFO [firewall] allowing VPN connection...
2023-12-23T15:27:13Z INFO [openvpn] DEPRECATED OPTION: --cipher set to 'AES-128-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'AES-128-CBC' to --data-ciphers or change --cipher 'AES-128-CBC' to --data-ciphers-fallback 'AES-128-CBC' to silence this warning.
2023-12-23T15:27:13Z INFO [openvpn] OpenVPN 2.5.8 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Nov  2 2022
2023-12-23T15:27:13Z INFO [openvpn] library versions: OpenSSL 3.1.4 24 Oct 2023, LZO 2.10
2023-12-23T15:27:13Z INFO [openvpn] TCP/UDP: Preserving recently used remote address: [AF_INET]MASKED_IP:443
2023-12-23T15:27:13Z INFO [openvpn] UDP link local: (not bound)
2023-12-23T15:27:13Z INFO [openvpn] UDP link remote: [AF_INET]MASKED_IP:443
2023-12-23T15:27:13Z INFO [openvpn] [server] Peer Connection Initiated with [AF_INET]MASKED_IP:443
2023-12-23T15:27:14Z ERROR [openvpn] Unrecognized option or missing or extra parameter(s) in [PUSH-OPTIONS]:6: block-outside-dns (2.5.8)
2023-12-23T15:27:14Z INFO [openvpn] TUN/TAP device tun0 opened
2023-12-23T15:27:14Z INFO [openvpn] /sbin/ip link set dev tun0 up mtu 1500
2023-12-23T15:27:14Z INFO [openvpn] /sbin/ip link set dev tun0 up
2023-12-23T15:27:14Z INFO [openvpn] /sbin/ip addr add dev tun0 local 10.11.0.38 peer 10.11.0.37
2023-12-23T15:27:16Z ERROR [openvpn] OpenVPN tried to add an IP route which already exists (RTNETLINK answers: File exists)
2023-12-23T15:27:16Z WARN [openvpn] Previous error details: Linux route add command failed: external program exited with error status: 2
2023-12-23T15:27:16Z INFO [openvpn] UID set to nonrootuser
2023-12-23T15:27:16Z INFO [openvpn] Initialization Sequence Completed
2023-12-23T15:27:18Z ERROR [ip getter] Get "https://ipinfo.io/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) - retrying in 20s

Output from docker exec ovpn ip route show table all

default via 172.18.0.1 dev eth0 table 200 
0.0.0.0/1 via 10.11.0.41 dev tun0 
default via 172.18.0.1 dev eth0 
10.11.0.1 via 10.11.0.41 dev tun0 
10.11.0.41 dev tun0 proto kernel scope link src 10.11.0.42 
128.0.0.0/1 via 10.11.0.41 dev tun0 
134.209.156.232 via 172.18.0.1 dev eth0 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2 
local 10.11.0.42 dev tun0 table local proto kernel scope host src 10.11.0.42 
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1 
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1 
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1 
local 172.18.0.2 dev eth0 table local proto kernel scope host src 172.18.0.2 
broadcast 172.18.255.255 dev eth0 table local proto kernel scope link src 172.18.0.2 
cedstrom commented 7 months ago

There is definitely something strange going on here; I am seeing a similar error in my setup. The host network is 10.0.0.0/8, the docker network is 192.168.0.0/20, and the routes pushed from the server are

which results in this routing table in the container:

default via 192.168.6.1 dev eth0 table 200
default via 192.168.6.1 dev eth0
172.27.224.0/20 dev tun0 proto kernel scope link src 172.27.227.41
172.31.0.0/16 via 172.27.224.1 dev tun0
192.168.6.0/24 dev eth0 proto kernel scope link src 192.168.6.2
192.168.40.0/24 via 172.27.224.1 dev tun0
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
local 172.27.227.41 dev tun0 table local proto kernel scope host src 172.27.227.41
broadcast 172.27.239.255 dev tun0 table local proto kernel scope link src 172.27.227.41
local 192.168.6.2 dev eth0 table local proto kernel scope host src 192.168.6.2
broadcast 192.168.6.255 dev eth0 table local proto kernel scope link src 192.168.6.2

Yet I still get

2024-04-19T21:51:40Z INFO [openvpn] TUN/TAP device tun0 opened
2024-04-19T21:51:40Z INFO [openvpn] /sbin/ip link set dev tun0 up mtu 1500
2024-04-19T21:51:40Z INFO [openvpn] /sbin/ip link set dev tun0 up
2024-04-19T21:51:40Z INFO [openvpn] /sbin/ip addr add dev tun0 172.27.234.23/20
2024-04-19T21:51:41Z ERROR [openvpn] OpenVPN tried to add an IP route which already exists (RTNETLINK answers: File exists)
2024-04-19T21:51:41Z WARN [openvpn] Previous error details: Linux route add command failed: external program exited with error status: 2

and a non-functional connection upon startup.

Also, if I run openvpn from the host with the same config file, the connection works fine with no routing errors.
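
In case it helps with debugging, the policy rules might matter too, since gluetun installs its own rules (pref 98 and 100 in the logs earlier in this thread); a sketch, assuming the gluetun container is simply named gluetun:

docker exec gluetun ip rule list
docker exec gluetun ip route show table all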

pmorch commented 6 months ago

Hi,

I've run into this also. Changing from VyprVPN to SurfShark made it go away for me. So what OpenVPN servers are the other affected users using?

Details

I used this docker-compose.yaml in two different directories, vpr1 and vpr2:

version: "3"
secrets:
  openvpn_user:
    file: ../gluetun/vyprvpn_user.txt
  openvpn_password:
    file: ../gluetun/vyprvpn_password.txt
services:
  vpnuser:
    image: ubuntu
    network_mode: "service:gluetun"
    command: [ 'sleep', 'infinity' ]
    restart: unless-stopped
    depends_on:
      gluetun:
        condition: service_healthy
  gluetun:
    image: qmcgaw/gluetun
    # container_name: gluetun
    cap_add:
      - NET_ADMIN
    secrets:
      - openvpn_user
      - openvpn_password
    environment:
      - VPN_SERVICE_PROVIDER=vyprvpn
      - SERVER_HOSTNAMES=dk1.vyprvpn.com
      - HTTPPROXY=on
    volumes:
      # See ../gluetun/docker-compose-update.yaml
      - ../gluetun/data:/gluetun
    devices:
      - /dev/net/tun:/dev/net/tun
    restart: unless-stopped

The purpose of the vpnuser service is that docker compose up doesn't finish until the VPN is healthy, so we can see whether it comes up or not.

So go into vpr1 and run:

docker compose up -d

Now go into vpr2 and run:

for i in $(seq 10) ; do echo attempt# $i ; sudo docker compose down && sudo docker compose up -d ; done

In this second directory, gluetun failed to come up in 9 out of 10 attempts (because of something funky with VyprVPN?).

I then signed up for Surfshark (something I've been wanting to do for a while anyway) and created two more directories, ss1 and ss2, also with identical docker-compose.yaml files, but:

$ diff -u vpr1/docker-compose.yaml ss1/docker-compose.yaml
--- vpr1/docker-compose.yaml    2024-04-28 00:45:19.610612914 +0200
+++ ss1/docker-compose.yaml 2024-04-28 00:40:16.807990239 +0200
@@ -1,9 +1,9 @@
 version: "3"
 secrets:
   openvpn_user:
-    file: ../gluetun/vyprvpn_user.txt
+    file: ../gluetun/surfshark_user.txt
   openvpn_password:
-    file: ../gluetun/vyprvpn_password.txt
+    file: ../gluetun/surfshark_password.txt
 services:
   vpnuser:
     image: ubuntu
@@ -22,8 +22,8 @@
       - openvpn_user
       - openvpn_password
     environment:
-      - VPN_SERVICE_PROVIDER=vyprvpn
-      - SERVER_HOSTNAMES=dk1.vyprvpn.com
+      - VPN_SERVICE_PROVIDER=surfshark
+      - SERVER_HOSTNAMES=ch-zur.prod.surfshark.com
       - HTTPPROXY=on
     volumes:
       # See ../gluetun/docker-compose-update.yaml

Doing the same thing, starting ss1 and then cycling ss2 down and up in the for loop, succeeded all 10 times.

cedstrom commented 6 months ago

So what OpenVPN servers are the other affected users using?

I'm using an internal company VPN. This malady also seems to be spreading: I had several tunnels set up, and now more of them are failing with this issue without any changes.

pmorch commented 6 months ago

I had to change (again) from SurfShark to AirVPN (because of an unrelated matter: Surfshark not supporting port forwarding), and AirVPN also doesn't have this problem at all. But VyprVPN did 100% of the time.

qdm12 commented 1 month ago

Hello everyone, is this problem still happening today on the latest image?

@WINOFFRG Please re-read carefully:

Try running a container (like alpine) connected to the Docker network where 1 gluetun instance is running and run ip route show table all?

I did not suggest running that command within the Gluetun container, just in another simple container (like alpine) connected to the same Docker network as Gluetun (not one using Gluetun as its network stack).
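
Concretely, with placeholder names:

docker run -it --rm --network myproject_default alpine sh -c 'apk add -q iproute2 && ip route show table all'

runs a separate container attached to the same Docker network, whereas --network container:gluetun would make the container reuse Gluetun's network stack, which is not what I asked for.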

Also, that old unknown protocol "tcp-client" bug has been fixed for a while now.