WINOFFRG opened this issue 1 year ago
Can you try these in separate containers? With your existing configuration I believe there's a conflict with device names, addressing, etc.
This item from the wiki includes a short comment but I think it's key: "You can easily run multiple Gluetun containers..." does not mean you can run multiple gluetun services in the same container.
https://github.com/qdm12/gluetun-wiki/blob/main/setup/advanced/multiple-gluetun.md
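For what it's worth, here is a rough sketch of two fully separate Gluetun containers in one compose file. The service names, config file names and the custom-provider variables are placeholders/assumptions based on how I understand the wiki, so adjust them to your setup:
services:
  gluetun-a:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=custom
      - OPENVPN_CUSTOM_CONFIG=/gluetun/AAA.conf
    volumes:
      - ./AAA.conf:/gluetun/AAA.conf:ro
  gluetun-b:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=custom
      - OPENVPN_CUSTOM_CONFIG=/gluetun/BBB.conf
    volumes:
      - ./BBB.conf:/gluetun/BBB.conf:ro
Each service is its own container with its own network namespace and routing table, so the two tunnels should not see each other's routes.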
Hey! Kindly check the docker-compose above. These are two separate containers, both mounted on the host, and the gluetun service is running in isolation in each of them with a different custom config. The only question I have is whether something related to /dev/net/tun:/dev/net/tun is causing the issue, since both containers try to mount the same device. I'm not sure, I don't have much knowledge of this area.
You've omitted the services: element, so it appears you're attempting to run two services rather than two separate containers. Assuming these are in fact separate containers, I'd suggest trying a second tun device on the container host. I just tried this and it seems to work:
# mknod /dev/net/tun0 c 10 200
# chmod 666 /dev/net/tun0
# docker compose up -d
# docker logs gluetun
| └── Wireguard settings:
[ ... ]
| └── Network interface: tun0
Hey! Sorry, yes, my bad, services is there. However, this doesn't seem to be working for me. I ran the first two commands, which added the tun0 device. I'm not sure if I need to change the device in the second container's config, but I tried all of these combinations below:
devices:
- /dev/net/tun0:/dev/net/tun0
devices:
- /dev/net/tun:/dev/net/tun
devices:
- /dev/net/tun0:/dev/net/tun
and the results are still the same. The first container runs fine, but even if I stop and remove that container, the second one still doesn't work:
| └── OpenVPN settings:
| ├── OpenVPN version: 2.5
| ├── User: [not set]
| ├── Password: [not set]
| ├── Custom configuration file: /gluetun/BBB.conf
| ├── Network interface: tun0
| ├── Run OpenVPN as: root
| └── Verbosity level: 1
This happens even when only one gluetun container is running on the whole system. I also tried other configs, suspecting some issue with the current ovpn configs; they seem to work fine on other clients, but hit the above issue only here.
I'm not certain if the /dev/net/tun devices are the root cause of this issue. Something that concerns me is that you seem to have two containers under a single services: definition. What happens if you separate each one into its own compose file and bring them up separately?
/dev/net/tun is unlikely the cause here. As the error message OpenVPN tried to add an IP route which already exists says, this is a routing problem. Usually each container has its own routing; maybe on your setup it's mixed up for some reason. Try running a container (like alpine) connected to the Docker network where 1 gluetun instance is running and run ip route show table all?
Hi! Thanks for checking. Running just one gluetun container now, and below is the output of the ip route show table all command:
100.65.251.80 dev tailscale0 table 52
100.100.100.100 dev tailscale0 table 52
100.110.223.26 dev tailscale0 table 52
100.115.52.112 dev tailscale0 table 52
default via 10.1.0.1 dev eth0 proto dhcp src 10.1.0.4 metric 100
10.1.0.0/24 dev eth0 proto kernel scope link src 10.1.0.4
168.63.129.16 via 10.1.0.1 dev eth0 proto dhcp src 10.1.0.4 metric 100
169.254.169.254 via 10.1.0.1 dev eth0 proto dhcp src 10.1.0.4 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.18.0.0/16 dev br-377c5a658b57 proto kernel scope link src 172.18.0.1
172.19.0.0/16 dev br-f9db474fc9e0 proto kernel scope link src 172.19.0.1 linkdown
172.20.0.0/16 dev br-1cc25968a922 proto kernel scope link src 172.20.0.1
172.21.0.0/16 dev br-705dffb3c3fc proto kernel scope link src 172.21.0.1 linkdown
broadcast 10.1.0.0 dev eth0 table local proto kernel scope link src 10.1.0.4
local 10.1.0.4 dev eth0 table local proto kernel scope host src 10.1.0.4
broadcast 10.1.0.255 dev eth0 table local proto kernel scope link src 10.1.0.4
local 100.105.116.126 dev tailscale0 table local proto kernel scope host src 100.105.116.126
broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
broadcast 172.17.0.0 dev docker0 table local proto kernel scope link src 172.17.0.1 linkdown
local 172.17.0.1 dev docker0 table local proto kernel scope host src 172.17.0.1
broadcast 172.17.255.255 dev docker0 table local proto kernel scope link src 172.17.0.1 linkdown
broadcast 172.18.0.0 dev br-377c5a658b57 table local proto kernel scope link src 172.18.0.1
local 172.18.0.1 dev br-377c5a658b57 table local proto kernel scope host src 172.18.0.1
broadcast 172.18.255.255 dev br-377c5a658b57 table local proto kernel scope link src 172.18.0.1
broadcast 172.19.0.0 dev br-f9db474fc9e0 table local proto kernel scope link src 172.19.0.1 linkdown
local 172.19.0.1 dev br-f9db474fc9e0 table local proto kernel scope host src 172.19.0.1
broadcast 172.19.255.255 dev br-f9db474fc9e0 table local proto kernel scope link src 172.19.0.1 linkdown
broadcast 172.20.0.0 dev br-1cc25968a922 table local proto kernel scope link src 172.20.0.1
local 172.20.0.1 dev br-1cc25968a922 table local proto kernel scope host src 172.20.0.1
broadcast 172.20.255.255 dev br-1cc25968a922 table local proto kernel scope link src 172.20.0.1
broadcast 172.21.0.0 dev br-705dffb3c3fc table local proto kernel scope link src 172.21.0.1 linkdown
local 172.21.0.1 dev br-705dffb3c3fc table local proto kernel scope host src 172.21.0.1
broadcast 172.21.255.255 dev br-705dffb3c3fc table local proto kernel scope link src 172.21.0.1 linkdown
fd7a:115c:a1e0::/48 dev tailscale0 table 52 metric 1024 pref medium
::1 dev lo proto kernel metric 256 pref medium
fd7a:115c:a1e0:ab12:4843:cd96:6269:747e dev tailscale0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev br-377c5a658b57 proto kernel metric 256 pref medium
fe80::/64 dev docker0 proto kernel metric 256 linkdown pref medium
fe80::/64 dev br-705dffb3c3fc proto kernel metric 256 linkdown pref medium
fe80::/64 dev tailscale0 proto kernel metric 256 pref medium
fe80::/64 dev br-f9db474fc9e0 proto kernel metric 256 linkdown pref medium
fe80::/64 dev br-1cc25968a922 proto kernel metric 256 pref medium
fe80::/64 dev veth4f1a8dc proto kernel metric 256 pref medium
fe80::/64 dev veth1a4fefe proto kernel metric 256 pref medium
fe80::/64 dev veth3fb88b3 proto kernel metric 256 pref medium
fe80::/64 dev veth794ab31 proto kernel metric 256 pref medium
fe80::/64 dev veth9067f1b proto kernel metric 256 pref medium
local ::1 dev lo table local proto kernel metric 0 pref medium
local fd7a:115c:a1e0:ab12:4843:cd96:6269:747e dev tailscale0 table local proto kernel metric 0 pref medium
local fe80::42:32ff:fe8d:2d23 dev br-f9db474fc9e0 table local proto kernel metric 0 pref medium
local fe80::42:35ff:fe36:45a2 dev br-377c5a658b57 table local proto kernel metric 0 pref medium
local fe80::42:9cff:fed2:b0f8 dev br-1cc25968a922 table local proto kernel metric 0 pref medium
local fe80::42:cfff:fec1:f92d dev br-705dffb3c3fc table local proto kernel metric 0 pref medium
local fe80::42:e4ff:fefc:3b06 dev docker0 table local proto kernel metric 0 pref medium
local fe80::20d:3aff:fe3e:63f dev eth0 table local proto kernel metric 0 pref medium
local fe80::3913:7fd4:cdcd:99eb dev tailscale0 table local proto kernel metric 0 pref medium
local fe80::40e2:afff:fe18:ebf9 dev veth1a4fefe table local proto kernel metric 0 pref medium
local fe80::7c19:69ff:fe8c:dbb1 dev veth3fb88b3 table local proto kernel metric 0 pref medium
local fe80::ac09:83ff:fe14:da9d dev veth794ab31 table local proto kernel metric 0 pref medium
local fe80::b861:c7ff:fed4:b28e dev veth4f1a8dc table local proto kernel metric 0 pref medium
local fe80::e850:d0ff:fef5:9c7a dev veth9067f1b table local proto kernel metric 0 pref medium
multicast ff00::/8 dev eth0 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev br-377c5a658b57 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev docker0 table local proto kernel metric 256 linkdown pref medium
multicast ff00::/8 dev br-705dffb3c3fc table local proto kernel metric 256 linkdown pref medium
multicast ff00::/8 dev tailscale0 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev br-f9db474fc9e0 table local proto kernel metric 256 linkdown pref medium
multicast ff00::/8 dev br-1cc25968a922 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth4f1a8dc table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth1a4fefe table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth3fb88b3 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth794ab31 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth9067f1b table local proto kernel metric 256 pref medium
Hi! Any updates on this issue?
Hi! Sorry to bother you, any updates? Just as an update: while browsing through other issues, I came across https://github.com/qdm12/gluetun/issues/1986, which is also one I encounter. I modified my config accordingly:
Before:
<connection>
proto tcp-client
remote REDACTED_IP 443
</connection>
<connection>
proto udp
remote REDACTED_IP 443
</connection>
After:
proto udp
remote REDACTED_IP 443
and it starts to work. However, I'm not sure what happened: I have tried a few past releases as well and the same error keeps popping up. I have tried restarting the device and reinstalling Docker, still the same. Kindly help if any updates are available on this.
Sorry for the delay;
Hi! Thanks for checking. Running just one gluetun container now, and below is the output of the ip route show table all command
That's a lot of routes! Is this from a command run in Gluetun, i.e. docker exec gluetun ip route show table all (or in a container connected to Gluetun), or is this on your host?
Just as an update: while browsing through other issues, I came across https://github.com/qdm12/gluetun/issues/1986, which is also one I encounter.
I'm not sure how this is relevant?
Maybe you meant the original issue #1967 from which that issue was created?
If so, the commit 75fd86962542eab693d7698c0e9e731b2f391bd1 (latest image) fixed support for tcp-client
for the custom provider.
However, I'm not sure what happened: I have tried a few past releases as well and the same error keeps popping up.
That part of your sentence is kind of confusing 😄 So proto udp works with everything, but proto tcp-client doesn't, right?
Hey! Thanks a lot for responding. Just after your response, I tried checking it again to share proper error details, however I'm not sure what exactly happened: without updating the image, the docker compose file or even the VPN configs, it started working again. Earlier it would sometimes run and then stop working after a restart, but that's not happening now. I will wait a little longer before closing this issue ...
I ran that command on the host itself.
I'm not sure how this is relevant?
I thought it could be related, or could even be another issue, so I shared that context as well in case it helps with debugging. To give more details, my ovpn configs are formatted like below:
<connection>
proto tcp-client
remote MASKED_IP 443
</connection>
<connection>
proto udp
remote MASKED_IP 443
</connection>
#push "redirect-gateway def1"
tls-client
remote-cert-tls server
cipher AES-128-CBC
nobind
dev tun0
pull
resolv-retry infinite
#compress lzo
tun-mtu 1500
tun-mtu-extra 32
mssfix 1450
persist-tun
persist-key
verb 3
route-method exe
route-delay 2
...
And all configs in the same format used to give this error on Running version latest built on 2023-12-09T17:29:04.776Z (commit 657b4b7):
ERROR [vpn] allowing VPN connection through firewall: allowing output traffic through VPN connection: command failed: "iptables --append OUTPUT -d MASKED_IP -o eth0 -p tcp-client -m tcp-client --dport 443 -j ACCEPT": iptables v1.8.9 (legacy): unknown protocol "tcp-client" specified Try iptables -h' or 'iptables --help' for more information.: exit status 2
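If I understand that error correctly, iptables only accepts real protocol names such as tcp or udp, so a rule built from proto tcp-client can never be valid. The equivalent working rule (IP redacted as above; just my guess at what is intended) would look something like:
iptables --append OUTPUT -d MASKED_IP -o eth0 -p tcp -m tcp --dport 443 -j ACCEPT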
Regarding this, I think you mentioned a fix in the latest commit. Earlier I used to work around this issue by renaming tcp-client to tcp, but then a new error comes up:
ERROR [openvpn] Each 'connection' block must contain exactly one 'remote' directive
This was self-explanatory to me, so in that case I just removed the connection blocks, going from:
<connection>
proto tcp
remote MASKED_IP 443
</connection>
<connection>
proto udp
remote MASKED_IP 443
</connection>
to just:
proto udp
remote MASKED_IP 443
and this used to work, but unfortunately the last time I restarted all the containers I had to raise this issue. Somehow it now seems to be fixed without any update 🤔 Not sure if anything gets pulled from a remote internally. However, regarding the second error mentioned above, I'm still working around it by removing the connection block. Not sure why it works with OpenVPN directly but not here in Gluetun. If required, I can share the OpenVPN config privately.
Unfortunately, I started getting the same error again now.
2023-12-23T15:27:13Z INFO [healthcheck] program has been unhealthy for 21s: restarting VPN (see https://github.com/qdm12/gluetun-wiki/blob/main/faq/healthcheck.md)
2023-12-23T15:27:13Z INFO [vpn] stopping
2023-12-23T15:27:13Z INFO [vpn] starting
2023-12-23T15:27:13Z INFO [firewall] allowing VPN connection...
2023-12-23T15:27:13Z INFO [openvpn] DEPRECATED OPTION: --cipher set to 'AES-128-CBC' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'AES-128-CBC' to --data-ciphers or change --cipher 'AES-128-CBC' to --data-ciphers-fallback 'AES-128-CBC' to silence this warning.
2023-12-23T15:27:13Z INFO [openvpn] OpenVPN 2.5.8 x86_64-alpine-linux-musl [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Nov 2 2022
2023-12-23T15:27:13Z INFO [openvpn] library versions: OpenSSL 3.1.4 24 Oct 2023, LZO 2.10
2023-12-23T15:27:13Z INFO [openvpn] TCP/UDP: Preserving recently used remote address: [AF_INET]MASKED_IP:443
2023-12-23T15:27:13Z INFO [openvpn] UDP link local: (not bound)
2023-12-23T15:27:13Z INFO [openvpn] UDP link remote: [AF_INET]MASKED_IP:443
2023-12-23T15:27:13Z INFO [openvpn] [server] Peer Connection Initiated with [AF_INET]MASKED_IP:443
2023-12-23T15:27:14Z ERROR [openvpn] Unrecognized option or missing or extra parameter(s) in [PUSH-OPTIONS]:6: block-outside-dns (2.5.8)
2023-12-23T15:27:14Z INFO [openvpn] TUN/TAP device tun0 opened
2023-12-23T15:27:14Z INFO [openvpn] /sbin/ip link set dev tun0 up mtu 1500
2023-12-23T15:27:14Z INFO [openvpn] /sbin/ip link set dev tun0 up
2023-12-23T15:27:14Z INFO [openvpn] /sbin/ip addr add dev tun0 local 10.11.0.38 peer 10.11.0.37
2023-12-23T15:27:16Z ERROR [openvpn] OpenVPN tried to add an IP route which already exists (RTNETLINK answers: File exists)
2023-12-23T15:27:16Z WARN [openvpn] Previous error details: Linux route add command failed: external program exited with error status: 2
2023-12-23T15:27:16Z INFO [openvpn] UID set to nonrootuser
2023-12-23T15:27:16Z INFO [openvpn] Initialization Sequence Completed
2023-12-23T15:27:18Z ERROR [ip getter] Get "https://ipinfo.io/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) - retrying in 20s
Output from docker exec ovpn ip route show table all
default via 172.18.0.1 dev eth0 table 200
0.0.0.0/1 via 10.11.0.41 dev tun0
default via 172.18.0.1 dev eth0
10.11.0.1 via 10.11.0.41 dev tun0
10.11.0.41 dev tun0 proto kernel scope link src 10.11.0.42
128.0.0.0/1 via 10.11.0.41 dev tun0
134.209.156.232 via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2
local 10.11.0.42 dev tun0 table local proto kernel scope host src 10.11.0.42
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
local 172.18.0.2 dev eth0 table local proto kernel scope host src 172.18.0.2
broadcast 172.18.255.255 dev eth0 table local proto kernel scope link src 172.18.0.2
There is definitely something strange going on here; I am seeing a similar error in my setup. The host network is 10.0.0.0/8, the Docker network is 192.168.0.0/20, and the routes pushed from the server result in this routing table in the container:
default via 192.168.6.1 dev eth0 table 200
default via 192.168.6.1 dev eth0
172.27.224.0/20 dev tun0 proto kernel scope link src 172.27.227.41
172.31.0.0/16 via 172.27.224.1 dev tun0
192.168.6.0/24 dev eth0 proto kernel scope link src 192.168.6.2
192.168.40.0/24 via 172.27.224.1 dev tun0
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
local 172.27.227.41 dev tun0 table local proto kernel scope host src 172.27.227.41
broadcast 172.27.239.255 dev tun0 table local proto kernel scope link src 172.27.227.41
local 192.168.6.2 dev eth0 table local proto kernel scope host src 192.168.6.2
broadcast 192.168.6.255 dev eth0 table local proto kernel scope link src 192.168.6.2
Yet I still get
2024-04-19T21:51:40Z INFO [openvpn] TUN/TAP device tun0 opened
2024-04-19T21:51:40Z INFO [openvpn] /sbin/ip link set dev tun0 up mtu 1500
2024-04-19T21:51:40Z INFO [openvpn] /sbin/ip link set dev tun0 up
2024-04-19T21:51:40Z INFO [openvpn] /sbin/ip addr add dev tun0 172.27.234.23/20
2024-04-19T21:51:41Z ERROR [openvpn] OpenVPN tried to add an IP route which already exists (RTNETLINK answers: File exists)
2024-04-19T21:51:41Z WARN [openvpn] Previous error details: Linux route add command failed: external program exited with error status: 2
and a non-functional connection upon startup.
Also, if I run openvpn from the host with the same config file, the connection works fine with no routing errors.
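For reference, the host-side test was simply running the profile directly with OpenVPN, roughly like this (the config file name here is a placeholder):
sudo openvpn --config client.ovpn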
Hi,
I've run into this also. Changing from VyprVPN to Surfshark made it go away for me. So what OpenVPN servers are the other affected users using?
I used this docker-compose.yaml
in two different directories, vpr1
and vpr2
:
version: "3"
secrets:
  openvpn_user:
    file: ../gluetun/vyprvpn_user.txt
  openvpn_password:
    file: ../gluetun/vyprvpn_password.txt
services:
  vpnuser:
    image: ubuntu
    network_mode: "service:gluetun"
    command: [ 'sleep', 'infinity' ]
    restart: unless-stopped
    depends_on:
      gluetun:
        condition: service_healthy
  gluetun:
    image: qmcgaw/gluetun
    # container_name: gluetun
    cap_add:
      - NET_ADMIN
    secrets:
      - openvpn_user
      - openvpn_password
    environment:
      - VPN_SERVICE_PROVIDER=vyprvpn
      - SERVER_HOSTNAMES=dk1.vyprvpn.com
      - HTTPPROXY=on
    volumes:
      # See ../gluetun/docker-compose-update.yaml
      - ../gluetun/data:/gluetun
    devices:
      - /dev/net/tun:/dev/net/tun
    restart: unless-stopped
The purpose of the vpnuser
service is that docker compose up
doesn't finish until the VPN is healthy so we can see whether it comes up or not.
So go into vpr1 and run
docker compose up -d
Now go into vpr2 and run:
for i in $(seq 10) ; do echo attempt# $i ; sudo docker compose down && sudo docker compose up -d ; done
In this second directory, gluetun
failed to come up in 9 out of 10 attempts (because of something funky with VyprVPN?).
I then signed up for Surfshark (something I've been wanting to do for a while anyway) and created two more directories, ss1 and ss2, also with identical docker-compose.yaml files, but:
$ diff -u vpr1/docker-compose.yaml ss1/docker-compose.yaml
--- vpr1/docker-compose.yaml	2024-04-28 00:45:19.610612914 +0200
+++ ss1/docker-compose.yaml	2024-04-28 00:40:16.807990239 +0200
@@ -1,9 +1,9 @@
 version: "3"
 secrets:
   openvpn_user:
-    file: ../gluetun/vyprvpn_user.txt
+    file: ../gluetun/surfshark_user.txt
   openvpn_password:
-    file: ../gluetun/vyprvpn_password.txt
+    file: ../gluetun/surfshark_password.txt
 services:
   vpnuser:
     image: ubuntu
@@ -22,8 +22,8 @@
       - openvpn_user
       - openvpn_password
     environment:
-      - VPN_SERVICE_PROVIDER=vyprvpn
-      - SERVER_HOSTNAMES=dk1.vyprvpn.com
+      - VPN_SERVICE_PROVIDER=surfshark
+      - SERVER_HOSTNAMES=ch-zur.prod.surfshark.com
       - HTTPPROXY=on
     volumes:
       # See ../gluetun/docker-compose-update.yaml
Doing the same thing, starting ss1 and then letting ss2 go down and up in a for loop, succeeded all 10 times.
So what OpenVPN servers are the other affected users using?
I'm using an internal company VPN. This malady also seems to be spreading: I had several tunnels set up, and now more of them are failing with this issue without any changes.
I had to change (again) from Surfshark to AirVPN (because of an unrelated matter: Surfshark not supporting port forwarding), and AirVPN also doesn't have this problem at all. But VyprVPN did, 100% of the time.
Hello everyone, is this problem still happening today on the latest image?
@WINOFFRG Please re-read carefully:
Try running a container (like alpine) connected to the Docker network where 1 gluetun instance is running and run ip route show table all?
I did not suggest running that command within a Gluetun container, just in another simple container (like alpine) connected to the same Docker network as Gluetun (not one that uses Gluetun as its network stack).
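For example, if Compose created a network named gluetun_default (a guess at the name; check docker network ls for the actual one), something like this would do it. The iproute2 install is there because alpine's busybox ip applet may not support table all:
docker run --rm --network gluetun_default alpine sh -c "apk add --no-cache iproute2 && ip route show table all"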
Also, that old bug unknown protocol "tcp-client" has been fixed for a while now.
Is this urgent?
Yes
Host OS
Ubuntu 20.04.6 LTS
CPU arch
x86_64
VPN service provider
Custom
What are you using to run the container
docker-compose
What is the version of Gluetun
Running version latest built on 2023-04-12T12:34:51.538Z (commit d4f8eea)
What's the problem 🤔
In my docker-compose I have multiple OpenVPN configs, and only one of them is able to connect. Is it because of /dev/net/tun:/dev/net/tun, i.e. that maybe something gets locked? I don't have much idea about this. Please check the logs shared below. Things to note:
Share your logs (at least 10 lines)
Share your configuration