Closed. redtripleAAA closed this issue 3 years ago.
docker exec gluetun wget -qO- https://ipinfo.io
network_mode: service:gluetun
Thanks for the update @qdm12
Answers:
I just upgraded the gluetun container to the latest release and both containers are running
https://github.com/qdm12/gluetun/releases/tag/v3.15.0
Ran the command
/ # wget -qO- https://ipinfo.io
{
"ip": "172.98.92.85",
"city": "Toronto",
"region": "Ontario",
"country": "CA",
"loc": "43.7001,-79.4163",
"org": "AS46562 Performive LLC",
"postal": "M5N",
"timezone": "America/Toronto",
"readme": "https://ipinfo.io/missingauth"
}
/ #
I will update when the issue happens again, soon.
gluetun already gets restarted automatically by itself.
What do you mean? Try restarting only netdata without restarting gluetun to see if that works? If it does, then it has something to do with Docker / iptables / kernel of the host.
/ # wget -qO- https://ipinfo.io
{
"ip": "172.98.80.185",
"city": "Virginia Beach",
"region": "Virginia",
"country": "US",
"loc": "36.8529,-75.9780",
"org": "AS46562 Performive LLC",
"postal": "23458",
"timezone": "America/New_York",
"readme": "https://ipinfo.io/missingauth"
}
/ #
Note: none of the containers connected to gluetun are accessible. I restarted one of them manually and it's working now.
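That manual restart can be scripted from the host by probing a connected container's egress and restarting it when the probe fails. A minimal sketch (the container name qbittorrent and the availability of wget inside it are assumptions about the setup, not something from this thread):

```shell
# Hypothetical watchdog step: probe egress from inside a container that
# shares gluetun's network, and restart it if the probe fails.
restart_if_offline() {
  name="$1"
  if docker exec "$name" wget -qO- --timeout=10 https://ipinfo.io >/dev/null 2>&1; then
    echo "$name is online"
  else
    docker restart "$name" >/dev/null
    echo "$name restarted"
  fi
}
```

Running something like `restart_if_offline qbittorrent` from cron every few minutes would approximate the manual restart described above.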
gluetun already gets restarted automatically by itself.
I meant that when the connection is lost, it restarts, which is a good thing to re-connect to the VPN. Example logs from when the ISP is down, below:
today at 1:09 PM 2021/03/14 13:09:01 WARN Caught OS signal terminated, shutting down
today at 1:09 PM 2021/03/14 13:09:01 INFO Clearing forwarded port status file / /volume1/docker/gluetun/config/port-forwarding/port.conf
today at 1:09 PM 2021/03/14 13:09:01 ERROR remove / /volume1/docker/gluetun/config/port-forwarding/port.conf: no such file or directory
today at 1:09 PM 2021/03/14 13:09:01 WARN openvpn: context canceled: exiting loop
today at 1:09 PM 2021/03/14 13:09:01 WARN healthcheck: context canceled: shutting down server
today at 1:09 PM 2021/03/14 13:09:01 WARN http server: context canceled: shutting down
today at 1:09 PM 2021/03/14 13:09:01 WARN http server: shut down
today at 1:09 PM 2021/03/14 13:09:01 WARN openvpn: loop exited
today at 1:09 PM 2021/03/14 13:09:01 WARN healthcheck: server shut down
today at 1:09 PM 2021/03/14 13:09:01 INFO Shutdown successful
today at 1:09 PM =========================================
today at 1:09 PM ================ Gluetun ================
today at 1:09 PM =========================================
today at 1:09 PM ==== A mix of OpenVPN, DNS over TLS, ====
today at 1:09 PM ======= Shadowsocks and HTTP proxy ======
today at 1:09 PM ========= all glued up with Go ==========
today at 1:09 PM =========================================
today at 1:09 PM =========== For tunneling to ============
today at 1:09 PM ======== your favorite VPN server =======
today at 1:09 PM =========================================
today at 1:09 PM === Made with ❤️ by github.com/qdm12 ====
today at 1:09 PM =========================================
today at 1:09 PM
today at 1:09 PM Running version latest built on 2021-03-13T13:54:28Z (commit fa220f9)
today at 1:09 PM
today at 1:09 PM
today at 1:09 PM 🔧 Need help? https://github.com/qdm12/gluetun/issues/new
today at 1:09 PM 💻 Email? quentin.mcgaw@gmail.com
today at 1:09 PM ☕ Slack? Join from the Slack button on Github
today at 1:09 PM 💸 Help me? https://github.com/sponsors/qdm12
today at 1:09 PM 2021/03/14 13:09:03 INFO OpenVPN version: 2.4.10
today at 1:09 PM 2021/03/14 13:09:03 INFO Unbound version: 1.10.1
today at 1:09 PM 2021/03/14 13:09:03 INFO IPtables version: v1.8.4
today at 1:09 PM 2021/03/14 13:09:03 INFO Settings summary below:
today at 1:09 PM |--OpenVPN:
today at 1:09 PM |--Verbosity level: 1
today at 1:09 PM |--Run as root: enabled
today at 1:09 PM |--Provider:
today at 1:09 PM |--Private Internet Access settings:
today at 1:09 PM |--Network protocol: udp
today at 1:09 PM |--Regions: ca ontario
today at 1:09 PM |--Encryption preset: strong
today at 1:09 PM |--Custom port: 0
today at 1:09 PM |--Port forwarding:
today at 1:09 PM |--File path: / /volume1/docker/gluetun/config/port-forwarding/port.conf
today at 1:09 PM |--DNS:
today at 1:09 PM |--Plaintext address: 1.1.1.1
today at 1:09 PM |--DNS over TLS:
today at 1:09 PM |--Unbound:
today at 1:09 PM |--DNS over TLS providers:
today at 1:09 PM |--cloudflare
today at 1:09 PM |--Listening port: 53
today at 1:09 PM |--Access control:
today at 1:09 PM |--Allowed:
today at 1:09 PM |--0.0.0.0/0
today at 1:09 PM |--::/0
today at 1:09 PM |--Caching: enabled
today at 1:09 PM |--IPv4 resolution: enabled
today at 1:09 PM |--IPv6 resolution: disabled
today at 1:09 PM |--Verbosity level: 1/5
today at 1:09 PM |--Verbosity details level: 0/4
today at 1:09 PM |--Validation log level: 0/2
today at 1:09 PM |--Blocked hostnames:
today at 1:09 PM |--Blocked IP addresses:
today at 1:09 PM |--127.0.0.1/8
today at 1:09 PM |--10.0.0.0/8
today at 1:09 PM |--172.16.0.0/12
today at 1:09 PM |--192.168.0.0/16
today at 1:09 PM |--169.254.0.0/16
today at 1:09 PM |--::1/128
today at 1:09 PM |--fc00::/7
today at 1:09 PM |--fe80::/10
today at 1:09 PM |--::ffff:0:0/96
today at 1:09 PM |--Allowed hostnames:
today at 1:09 PM |--Block malicious: enabled
today at 1:09 PM |--Update: every 24h0m0s
today at 1:09 PM |--Firewall:
today at 1:09 PM |--System:
today at 1:09 PM |--Process user ID: 1029
today at 1:09 PM |--Process group ID: 100
today at 1:09 PM |--Timezone: america/toronto
today at 1:09 PM |--HTTP control server:
today at 1:09 PM |--Listening port: 8000
today at 1:09 PM |--Logging: enabled
today at 1:09 PM |--Public IP getter:
today at 1:09 PM |--Fetch period: 12h0m0s
today at 1:09 PM |--IP file: /tmp/gluetun/ip
today at 1:09 PM |--Github version information: enabled
I will try the setup
I think this is the issue here, as it only happens when gluetun does its health check (as mentioned in #2 above) and restarts, and then the other containers lose the connection.
Question: do you have any recommendation on how to force a restart of the other containers after the gluetun container restarts?
Gluetun does not restart even if it loses connection; it will change to an unhealthy state though. You can see it received a termination signal before restarting:
Caught OS signal terminated, shutting down
Do you maybe have it configured to restart when unhealthy? If so, that will break things. You could have a script restart gluetun, and then the other containers connected to it, whenever it's unhealthy, until I address #386
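Such a script could look roughly like this (a sketch only; the dependent container names after gluetun are placeholders, and it assumes Docker's built-in health status is populated):

```shell
# Sketch: restart gluetun and then its dependent containers whenever
# gluetun reports unhealthy. Names after "gluetun" are placeholders.
heal() {
  status="$(docker inspect --format '{{.State.Health.Status}}' gluetun)"
  if [ "$status" = "unhealthy" ]; then
    for name in gluetun qbittorrent netdata; do
      docker restart "$name" >/dev/null
    done
    echo "healed"
  else
    echo "status: $status"
  fi
}
```

Restarting gluetun first matters: the dependents must be restarted after the container whose network namespace they join is back up.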
You are right about the termination check, I misworded that. Thanks for correcting that.
Here is the stack I have for gluetun, FYI:
---
version: '2.4'
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    environment:
      - PUID=1029
      - PGID=100
      - TZ=America/Toronto
      - VPNSP=private internet access
      - REGION=CA Ontario
      - PORT_FORWARDING=on # Complete https://github.com/qdm12/gluetun/wiki/Environment-variables
      - PORT_FORWARDING_STATUS_FILE= /volume1/docker/gluetun/config/port-forwarding/port.conf
      - OPENVPN_USER=################## # Change to YOUR Username
      - OPENVPN_PASSWORD=############## # Change to YOUR Password
    volumes:
      - /volume1/docker/gluetun/config:/gluetun
    ports:
      - 8000:8000 # HTTP Server https://github.com/qdm12/gluetun/wiki/HTTP-Control-server#OpenVPN
      - 19999:19999 # Netdata
      - 666:80 # heimdal-VPN
      - 4466:443 # heimdal-VPN
      - 9080:9080 # QBitTorrent Web-UI
      - 6881:6881 # QBitTorrent
      - 6881:6881/udp # QBitTorrent
      #- 9117:9117 # Jackett
      #- 7878:7878 # Radarr
      #- 8989:8989 # Sonarr
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
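For reference, the containers whose ports are published on gluetun above would each point at it from their own service definition, along these lines (a minimal sketch; the qbittorrent service shown is illustrative, not taken from this thread):

```yaml
  qbittorrent:
    image: linuxserver/qbittorrent
    network_mode: service:gluetun  # reuse gluetun's network namespace
    depends_on:
      - gluetun
```

Because the network namespace is shared, the 9080/6881 mappings live on the gluetun service rather than on the qbittorrent service itself.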
I still need to research how to restart the container and the other connected containers on a health check.
You are right about the termination check, I misworded that.
So to be clear, the gluetun container (not OpenVPN inside it) never restarts by itself, right? It only restarts when it's told to, right?
Also, maybe try docker compose version '3'; maybe that helps with the networking part between containers.
I still need to research how to restart the container and the other connected containers on a health check.
I would advise you not to, I'll code that auto-healing in the coming days/2 weeks, it shouldn't be too hard to develop.
So to be clear the gluetun container (not openvpn inside) never restarts by itself right? It only restart when it's told to right?
I think so, as the docker logs don't show it restarting (unless I do it manually). Based on the Telegram notifier, the following gluetun events happen from time to time when the ISP drops, which is the termination part you mentioned above, and this is normal.
Status Unhealthy for gluetun (qmcgaw/gluetun) {90de6496f080}
Started gluetun (qmcgaw/gluetun) {90de6496f080}
Status Healthy for gluetun (qmcgaw/gluetun) {90de6496f080}
I would advise you not to, I'll code that auto-healing in the coming days/2 weeks, it shouldn't be too hard to develop.
That would be great!! 🥇
@qdm12 I have been monitoring the behavior of the gluetun container, and noticed something.
It's actually restarting the container, per the Portainer runtime, and here are the logs.
Telegram Notifier API logs:
Hamra Services ALERT, [16.03.21 06:21]
Stopped gluetun (qmcgaw/gluetun) {90de6496f080}
Exit Code: 1
Hamra Services ALERT, [16.03.21 06:21]
Started gluetun (qmcgaw/gluetun) {90de6496f080}
Hamra Services ALERT, [16.03.21 06:21]
Status Healthy for gluetun (qmcgaw/gluetun) {90de6496f080}
gluetun container logs
today at 3:09 AM 2021/03/16 03:09:06 INFO dns over tls: generate keytag query _ta-4a5c-4f66. NULL IN
today at 5:09 AM 2021/03/16 05:09:06 INFO dns over tls: generate keytag query _ta-4a5c-4f66. NULL IN
today at 5:30 AM 2021/03/16 05:30:38 INFO http server: 404 GET wrote 41B to 172.20.0.1:53698 in 23.849µs
today at 6:20 AM 2021/03/16 06:20:56 WARN Caught OS signal terminated, shutting down
today at 6:20 AM 2021/03/16 06:20:56 INFO Clearing forwarded port status file / /volume1/docker/gluetun/config/port-forwarding/port.conf
today at 6:20 AM 2021/03/16 06:20:56 ERROR remove / /volume1/docker/gluetun/config/port-forwarding/port.conf: no such file or directory
today at 6:20 AM 2021/03/16 06:20:56 WARN http server: context canceled: shutting down
today at 6:20 AM 2021/03/16 06:20:56 WARN openvpn: context canceled: exiting loop
today at 6:20 AM 2021/03/16 06:20:56 WARN dns over tls: context canceled: exiting loop
today at 6:20 AM 2021/03/16 06:20:56 WARN http server: shut down
today at 6:20 AM 2021/03/16 06:20:56 WARN healthcheck: context canceled: shutting down server
today at 6:20 AM 2021/03/16 06:20:56 WARN healthcheck: server shut down
today at 6:20 AM 2021/03/16 06:20:56 WARN dns over tls: loop exited
today at 6:20 AM 2021/03/16 06:20:56 WARN openvpn: loop exited
today at 6:21 AM 2021/03/16 06:21:01 WARN Shutdown timed out
today at 6:21 AM =========================================
today at 6:21 AM ================ Gluetun ================
today at 6:21 AM =========================================
today at 6:21 AM ==== A mix of OpenVPN, DNS over TLS, ====
today at 6:21 AM ======= Shadowsocks and HTTP proxy ======
today at 6:21 AM ========= all glued up with Go ==========
today at 6:21 AM =========================================
today at 6:21 AM =========== For tunneling to ============
today at 6:21 AM ======== your favorite VPN server =======
today at 6:21 AM =========================================
today at 6:21 AM === Made with ❤️ by github.com/qdm12 ====
today at 6:21 AM =========================================
today at 6:21 AM
today at 6:21 AM Running version latest built on 2021-03-13T13:54:28Z (commit fa220f9)
And then the connected containers lose the connection for sure, since their main network-mode container was rebooting.
And the ISP wasn't down; everything network/internet and PIA were working fine.
Is that expected behavior?
Caught OS signal terminated, shutting down
highlights it's receiving a signal from the docker daemon / Portainer to terminate. So it's an external thing shutting it down; it cannot really receive this signal from within. Are you maybe running low on memory?
@qdm12 makes sense. Will try to reach out to Portainer to troubleshoot the root cause of that issue.
I am not sure if it's Portainer that I have to talk to, as this issue happens between gluetun and the docker daemon only.
All the other containers are fine.
Do you recommend any logs to check/grab?
Maybe there is something interesting in docker inspect gluetun on why it restarted, although maybe not.
You could try running gluetun outside Portainer using docker-compose 3 in the cli, see if it gets restarted over time?
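The relevant fields of docker inspect can be pulled out with a Go template, for instance (a sketch; the field paths follow Docker's inspect schema):

```shell
# Sketch: summarise the last exit of a container from `docker inspect`.
last_exit() {
  docker inspect --format \
    'exit={{.State.ExitCode}} oom={{.State.OOMKilled}} at={{.State.FinishedAt}}' "$1"
}
```

Running `last_exit gluetun` after a restart would show the exit code and, via OOMKilled, whether the low-memory theory above holds.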
Hi there, did you find the root cause in the end? Or get some logs/healthcheck logs? Cheers
Hey @qdm12, no luck. I have been digging around to no avail, and have been restarting qBittorrent manually whenever gluetun breaks and restarts by itself.
Maybe off topic, but it will now restart OpenVPN from within if it gets unhealthy. You can try by pulling the latest image. You should thus disable the auto-healing now.
@qdm12 I did update both official image and the testing one
Since then, I haven't got any restarts on the container level
My docker compose is the same
version: '2.4'
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    environment:
      - PUID=1029
      - PGID=100
      - TZ=America/Toronto
      - VPNSP=private internet access
      - REGION=CA Ontario
      - PORT_FORWARDING=on # Complete https://github.com/qdm12/gluetun/wiki/Environment-variables
      - PORT_FORWARDING_STATUS_FILE=/gluetun/port-forwarding/port.conf
      - OPENVPN_USER=################# # Change to YOUR Username
      - OPENVPN_PASSWORD=################ # Change to YOUR Password
    volumes:
      - /volume1/docker/gluetun/config:/gluetun
    ports:
      - 8000:8000 # HTTP Server https://github.com/qdm12/gluetun/wiki/HTTP-Control-server#OpenVPN
      #- 19999:19999 # Netdata
      - 666:80 # heimdal-VPN
      - 4466:443 # heimdal-VPN
      - 9080:9080 # QBitTorrent Web-UI
      - 6881:6881 # QBitTorrent
      - 6881:6881/udp # QBitTorrent
      #- 9117:9117 # Jackett
      #- 7878:7878 # Radarr
      #- 8989:8989 # Sonarr
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
Do I need to add/remove anything in the environment variables above to disable the auto-heal? I don't see anything here: https://github.com/qdm12/gluetun/wiki/Environment-variables
I think the change you made a huge difference, as a great workaround for the Docker limitation when a container uses another container's network: connectivity no longer breaks, since the network container doesn't restart anymore at the host level.
Another note: do you think this is related, or should it maybe be another issue?
last Monday at 4:24:48 PM Running version latest built on 2021-04-19T19:54:17Z (commit fb8279f)
last Monday at 4:24:46 PM 2021/04/19 16:24:46 ERROR remove /gluetun/port-forwarding/port.conf: no such file or directory
last Monday at 5:06:03 PM 2021/04/19 17:06:03 ERROR port forwarding: cannot bind port: Get "https://10.48.110.1:19999/bindPort?payload=<payload>&signature=8ItKfjFfHlBE3%2FYc%2FiUfaCospLVJZzqG5adRGvkHNHbRBI%2FpbQuny0AZmz24Qe8yUO0Axkdr0ncp6PE2xb2zAg%3D%3D": dial tcp 10.48.110.1:19999: i/o timeout (Client.Timeout exceeded while awaiting headers)
last Monday at 8:06:34 PM 2021/04/19 20:06:34 ERROR port forwarding: cannot bind port: Get "https://10.48.110.1:19999/bindPort?payload=<payload>&signature=8ItKfjFfHlBE3%2FYc%2FiUfaCospLVJZzqG5adRGvkHNHbRBI%2FpbQuny0AZmz24Qe8yUO0Axkdr0ncp6PE2xb2zAg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
last Wednesday at 9:40:52 PM 2021/04/21 21:40:52 ERROR port forwarding: cannot bind port: Get "https://10.48.110.1:19999/bindPort?payload=
yesterday at 6:54:12 PM 2021/04/23 18:54:12 ERROR port forwarding: cannot bind port: Get "https://10.48.110.1:19999/bindPort?payload=<payload>&signature=8ItKfjFfHlBE3%2FYc%2FiUfaCospLVJZzqG5adRGvkHNHbRBI%2FpbQuny0AZmz24Qe8yUO0Axkdr0ncp6PE2xb2zAg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I see in Portainer it has a health label though.
No, that's just the VPN server side port forwarding which didn't answer within 30 seconds; probably a problem on PIA's server side I'd say. Closing the issue then, thanks!
This sounds like a known issue identified in this video: https://youtu.be/IWj1-j2QWvo?t=398. The gluetun container gets a new ID on restart, and the other containers using the gluetun container as their network interface need to be re-associated with it. Would be easy to fix if you can automate: get the container ID for gluetun, add it to the associated network interface(s), and restart the containers.
Would be easy to fix if you can automate...
Not that easy, but it's possible, I believe. I'm working on it through github.com/qdm12/deunhealth
I've had this problem, but it seemed to be because the ports, either 80 or 443, are port forwarded to another IP address.
This is not fixed yet. Every couple of days I have to manually redeploy my *arr stacks which are connected to the gluetun network.
Interestingly, this problem caught up with me once I started to use deunhealth.
Before, I used some other autoheal which did not react instantly to the "unhealthy" status like deunhealth does using Docker events, and I never had this problem. I guess gluetun restarts quickly, between two polls, and the old autoheal never caught it in an unhealthy state.
In fact, in my case (just checked), gluetun restarts and reaches "INFO [healthcheck] healthy!" in less than 4 seconds, while the default polling interval for the old autoheal was 5 seconds. 😊
So, @qdm12, congrats! You made both gluetun and deunhealth function a little bit too well. 😂
Now, there's a question of how to delay the restart with deunhealth to allow the container time to autoheal.
@qdm12, would you be so kind as to introduce a small variation to the label triggering deunhealth? The vast majority of users have ISPs that change the IP periodically by dropping the connection. It would be nice if it worked like this:
deunhealth.restart.on.unhealthy=5000
Where 5000 is an optional desired delay in milliseconds.
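The proposed delay could translate into something like the following on the deunhealth side (a purely illustrative shell sketch of the suggestion above; this is not an existing deunhealth feature):

```shell
# Illustration of the proposed delay: wait, then restart only if the
# container is still unhealthy (i.e. it did not autoheal in time).
restart_after_delay() {
  name="$1"; delay_ms="$2"
  sleep "$((delay_ms / 1000))"
  if [ "$(docker inspect --format '{{.State.Health.Status}}' "$name")" = "unhealthy" ]; then
    docker restart "$name" >/dev/null
    echo "restarted $name"
  else
    echo "$name recovered, no restart"
  fi
}
```

The second check after the sleep is the important part: it gives gluetun's own OpenVPN restart a window to bring the container back to healthy before deunhealth intervenes.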
@qdm12 this seems to still be an issue, happens to me pretty regularly, too. Let me know if I can provide additional info that could be of use
Still happening to me as well
Same issue. When I have some heavy traffic, I lose the connection from and to my docker containers behind Gluetun + ProtonVPN.
Is this urgent?:
Yes
#################################################################################
Host OS
CPU arch or device name:
Intel amd64
What VPN provider are you using:
PIA
What are you using to run your container?:
Docker Compose
What is the version of the program
#################################################################################
What's the problem 🤔
Docker containers become inaccessible when using "network_mode:" via gluetun VPN container
Note: Maybe this RFE I already created for some time would fix the issue? https://github.com/qdm12/gluetun/issues/386
The only way to resolve the issue is to manually restart the container.
#################################################################################
Logs:
For example, last logs for netdata:
For example, last logs for qBittorrent:
qt.network.ssl: QSslSocket::startClientEncryption: cannot start handshake on non-plain connection
#################################################################################
Notes: I already created GitHub bugs for these two products, but it only happens when I use the network mode with the gluetun container, so I thought it would be better to create the bug here.
netdata https://github.com/netdata/netdata/issues/10764
qBittorrent https://github.com/linuxserver/docker-qbittorrent/issues/105
In both you can find the docker compose stack I used for each and the troubleshooting steps taken.
Thanks