ezekieldas opened 4 months ago
@qdm12 is more or less the only maintainer of this project and works on it in his free time.
...and just to add: although this was janky and didn't reflect a good understanding of the 'updater', this is how I used the /v1/openvpn/status endpoint with gluetun:v3 and earlier to reissue an IP address. Although perhaps not the intended behavior, this was incredibly useful.
# curl http://localhost:8000/v1/updater/status
{"status":"stopped"}
# curl -X PUT localhost:8000/v1/openvpn/status -H "Content-Type: application/json" -d '{"status":"stopped"}'
{"outcome":"stopped"}
# curl -X PUT localhost:8000/v1/openvpn/status -H "Content-Type: application/json" -d '{"status":"running"}'
{"outcome":"running"}
2024-05-14T10:41:33-07:00 INFO [ip getter] Public IP address is 198.54.130.135 (United States, North Carolina, Raleigh)
2024-05-14T10:41:38-07:00 INFO [http server] 200 GET /status wrote 21B to 172.17.0.1:50462 in 80.974µs
2024-05-14T10:41:48-07:00 INFO [vpn] stopping
2024-05-14T10:41:48-07:00 DEBUG [wireguard] closing controller client...
2024-05-14T10:41:48-07:00 DEBUG [wireguard] removing IPv4 rule...
2024-05-14T10:41:48-07:00 DEBUG [wireguard] shutting down link...
2024-05-14T10:41:48-07:00 DEBUG [wireguard] deleting link...
2024-05-14T10:41:48-07:00 INFO [http server] 200 PUT /status wrote 22B to 172.17.0.1:40366 in 124.826671ms
2024-05-14T10:41:55-07:00 DEBUG [healthcheck] unhealthy: dialing: dial tcp4 1.1.1.1:443: i/o timeout
2024-05-14T10:42:03-07:00 INFO [vpn] starting
2024-05-14T10:42:03-07:00 DEBUG [wireguard] Wireguard server public key: Ow25Pdtyqbv/Y0I0myNixjJ2iljsKcH04PWvtJqbmCk=
2024-05-14T10:42:03-07:00 DEBUG [wireguard] Wireguard client private key: cCn...kg=
2024-05-14T10:42:03-07:00 DEBUG [wireguard] Wireguard pre-shared key: [not set]
2024-05-14T10:42:03-07:00 INFO [firewall] allowing VPN connection...
2024-05-14T10:42:03-07:00 DEBUG [firewall] iptables --delete OUTPUT -d 198.54.130.130 -o eth0 -p udp -m udp --dport 2049 -j ACCEPT
2024-05-14T10:42:03-07:00 DEBUG [firewall] iptables --delete OUTPUT -o tun0 -j ACCEPT
2024-05-14T10:42:03-07:00 DEBUG [firewall] ip6tables-nft --delete OUTPUT -o tun0 -j ACCEPT
2024-05-14T10:42:03-07:00 DEBUG [firewall] iptables --append OUTPUT -d 198.54.134.98 -o eth0 -p udp -m udp --dport 2049 -j ACCEPT
2024-05-14T10:42:03-07:00 DEBUG [firewall] iptables --append OUTPUT -o tun0 -j ACCEPT
2024-05-14T10:42:03-07:00 DEBUG [firewall] ip6tables-nft --append OUTPUT -o tun0 -j ACCEPT
2024-05-14T10:42:03-07:00 INFO [wireguard] Using available kernelspace implementation
2024-05-14T10:42:03-07:00 INFO [wireguard] Connecting to 198.54.134.98:2049
2024-05-14T10:42:03-07:00 INFO [wireguard] Wireguard setup is complete. Note Wireguard is a silent protocol and it may or may not work, without giving any error message. Typically i/o timeout errors indicate the Wireguard connection is not working.
2024-05-14T10:42:03-07:00 INFO [http server] 200 PUT /status wrote 22B to 172.17.0.1:57392 in 112.898698ms
2024-05-14T10:42:03-07:00 INFO [ip getter] Public IP address is 198.54.134.109 (United States, California, San Jose)
2024-05-14T10:42:04-07:00 INFO [healthcheck] healthy!
I will have a look into why it crashes now.
Note, although perhaps undocumented, you can try the route /v1/vpn/status, which works for both OpenVPN and Wireguard.
I forgot to update this one...
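For anyone scripting this from a wrapper/manager process like the one described above, a minimal sketch of driving the /v1/vpn/status route is below. This assumes the control server listens on the default localhost:8000; the helper names are mine for illustration, not part of gluetun.

```python
import json
import urllib.request

CONTROL_SERVER = "http://localhost:8000"  # assumed default control server address


def build_status_request(status: str) -> urllib.request.Request:
    """Build the PUT request setting the VPN status ("running" or "stopped")."""
    body = json.dumps({"status": status}).encode()
    return urllib.request.Request(
        f"{CONTROL_SERVER}/v1/vpn/status",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )


def set_vpn_status(status: str) -> dict:
    """Send the request and return the decoded JSON response body."""
    with urllib.request.urlopen(build_status_request(status)) as resp:
        return json.load(resp)


def reissue_ip() -> None:
    """Stop then restart the tunnel, which obtains a fresh public IP."""
    set_vpn_status("stopped")
    set_vpn_status("running")
```

The same stop/start cycle can of course be done with two curl PUTs, as in the earlier comment; the only change needed there is swapping /v1/openvpn/status for /v1/vpn/status.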
Yes, the /v1/vpn/status route works perfectly! Thank you!
Is this urgent?
No
Host OS
Ubuntu 22.04.4 LTS
CPU arch
x86_64
VPN service provider
IVPN
What are you using to run the container
docker-compose
What is the version of Gluetun
v3 to latest
What's the problem 🤔
Shortly after the close of https://github.com/qdm12/gluetun/issues/2217 I found the root cause. I have a wrapper/manager process which makes use of the Control Server. It was previously using the /v1/openvpn/status endpoint against a Wireguard configuration. While the documentation does note this endpoint is for OpenVPN, it functioned fine up to v3 (or later). I've since updated to using /v1/updater/status instead. Nonetheless, calling it against a Wireguard configuration crashes the Gluetun client and triggers restart attempts, as shown in the log below as well as in the previous bug report.
Share your logs (at least 10 lines)