malventano opened this issue 4 years ago
I just added the environment variable he suggests. I'll let you know if it works for me...
Hi. Any chance this will be implemented?
I had forgotten about this thread. I added the environment variable a while back and it still disconnects and forces me to restart the container in order to get it back up and running. It would be great if it detected the disconnect and restarted, but I live with it.
I've been looking at this issue off/on:
I think it has to do with how the start scripts are spawning new processes in a "fire and forget" manner. It starts up OpenVPN, then fires off another script that expects OpenVPN to be running properly. If OpenVPN has issues, it doesn't seem to die on its own, and even if it does, when I manually kill the process inside the container, there isn't anything that cares.
I have a fork of this repo that includes a health.sh that detects when there is lost connectivity (a ping to Google fails). I run this in Kubernetes, and it restarts the container, however upon restart it hangs because the tunnel network interface doesn't exist.
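Roughly, that kind of check can be sketched as follows. This is a hypothetical sketch, not the fork's actual health.sh: the `tun0` interface name, the `8.8.8.8` target, and the function names are my assumptions.

```shell
#!/bin/sh
# Hypothetical health.sh-style check: exit non-zero when the VPN looks
# down so the orchestrator (Docker healthcheck / Kubernetes probe) can
# restart the container.

# True if the network interface exists (checked via sysfs, no extra tools).
iface_exists() {
  [ -d "/sys/class/net/$1" ]
}

# True if a single ping bound to the interface gets a reply within 5s.
tunnel_reachable() {
  ping -I "$1" -c 1 -W 5 "$2" > /dev/null 2>&1
}

health_check() {
  iface=$1; target=$2
  if ! iface_exists "$iface"; then
    echo "unhealthy: $iface missing"
    return 1
  fi
  if ! tunnel_reachable "$iface" "$target"; then
    echo "unhealthy: no traffic via $iface"
    return 1
  fi
  echo "healthy"
}

# Run the check when invoked directly, e.g. health.sh tun0 8.8.8.8
if [ "$#" -ge 1 ]; then
  health_check "$1" "${2:-8.8.8.8}"
fi
```

Checking both the interface's existence and actual reachability matters here: after a restart the interface can be missing entirely, while after a WAN hiccup it can exist but pass no traffic.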
I'm not yet sure if number one and number two are related, but it sure is annoying to have to manually delete and re-create when my internet hiccups.
I don't have to delete the container, I only need to stop and restart it. That said, I've changed most of my network hardware out and haven't had a disconnect since.
> I don't have to delete the container, I only need to stop and restart it. That said, I've changed most of my network hardware out and haven't had a disconnect since.
A docker stop is the same as deleting a pod in k8s: in both cases all processes are killed and all mounts are re-established. In a k8s restart it doesn't unmount and remount the filesystem mounts. I'm wondering if the startup scripts can't handle this... it ends up duplicating the output to resolv.conf on restart, which could be part of the issue.
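If duplicated resolv.conf output on restart is part of the problem, one fix in the start script would be to rewrite the file idempotently instead of appending. A sketch under assumptions: the `write_resolv_conf` name is mine; only the comma-separated NAME_SERVERS format comes from this image.

```shell
#!/bin/sh
# Hypothetical idempotent version of the resolv.conf step: rebuild the
# nameserver list from scratch on every (re)start, so a container restart
# cannot duplicate entries the way repeated appends do.
write_resolv_conf() {
  out=$1; servers=$2      # servers: comma-separated, e.g. "8.8.8.8,8.8.4.4"
  : > "$out"              # truncate first; the loop below starts clean
  echo "$servers" | tr ',' '\n' | while read -r ns; do
    if [ -n "$ns" ]; then
      echo "nameserver $ns" >> "$out"
    fi
  done
}
```

Running it twice with the same NAME_SERVERS value leaves the file unchanged, which is exactly the property an append-based script lacks.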
Thanks for the insight into the network equipment changes. Any chance you were able to narrow it down to a single type of equipment? My cable modem and router are both new; other than that I have switches, but I can't see those causing issues.
My old router was an Asus RT-66U and I'm using the standard ISP-provided cable modem. I know there were issues with renewing the DHCP lease from the ISP at times. I also had an old HP switch (2524 series, I think). My new router is a pfSense setup and the rest of the network gear is all Unifi equipment. My WiFi is much more reliable and overall network speed is much better. As my client is wired, the WiFi has nothing to do with it. However, the speed improvement is likely due to reliability, as it's still the same clients on the network. If I was suffering from brief disconnects or packet loss, that could explain the regular dropping of the VPN connection. I'd average a day, sometimes 2, before having to restart the container.
oh, well that helps.
> ...that could explain the regular dropping of the VPN connection. I'd average a day, sometimes 2, before having to restart the container.
I only have issues when my connection to the internet goes south, so it's usually 1, maybe 2 times per month. I'm somewhat of an automation nerd, so having to do anything 1-2 times per month, manually, is anti-me.
You and me both. I expect it to work!
Ok, I believe I found a resolution to this for those using docker-compose.
version: "3"
services:
  autoheal:
    container_name: autoheal
    image: willfarrell/autoheal
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - AUTOHEAL_CONTAINER_LABEL=all
    restart: always
  qbittorrent-vpn:
    container_name: qbittorrent-vpn
    image: markusmcnugen/qbittorrentvpn
    privileged: true
    volumes:
      - /volume1/docker/qbittorrent-vpn/config:/config
      - /volume1/docker/qbittorrent-vpn/downloads:/downloads
    environment:
      - AUTOHEAL_CONTAINER_LABEL=true
      - VPN_ENABLED=yes
      - VPN_USERNAME=your username here
      - VPN_PASSWORD=your password here
      - LAN_NETWORK=192.168.10.0/24 # change to your LAN network
      - NAME_SERVERS=8.8.8.8,8.8.4.4
    ports:
      - 8080:8080
      - 8999:8999
    restart: always
    healthcheck:
      test: "ifconfig | grep ^tun0 > /dev/null"
      interval: 30s
      timeout: 5s
      retries: 3
The first container is autoheal. It watches for the container health check to fail, and then restarts the container. The second container is qbittorrent-vpn (this one). The health check looks for the status of the tunnel. If the tunnel is down, on the third failure it marks the container as unhealthy. I've tested it as best I can without waiting for it to fail on its own, but forcing it to fail causes the container to be restarted.
Just figured out another way to test it. Set VPN_ENABLED=no and it promptly failed 3 times and restarted. Maybe the health check could be built into the container once it's been verified as a working solution...
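Building it in would amount to a HEALTHCHECK instruction in the image's Dockerfile. A hypothetical sketch (using `ip link` rather than `ifconfig`, since newer base images may not ship the latter):

```dockerfile
# Mark the container unhealthy after 3 consecutive failed tunnel checks.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD ip link show tun0 > /dev/null 2>&1 || exit 1
```

Note that Docker itself only flags the container unhealthy; something like autoheal (or a Kubernetes liveness probe) still has to act on that status and restart it.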
> test: "ifconfig | grep ^tun0 > /dev/null"
Nice, I have my own "healthcheck" that I created, but this seems simpler. I'll steal this for my Kubernetes deployment and :crossed_fingers: it works the same there. Mine detects failure, but restarts never seem to solve it, and I'm unclear why (as I've not spent time to dig in).
I rebooted the router for kicks and, of course, the VPN went down and qbittorrent stopped as it's supposed to. The health check timed out and marked the container unhealthy, and 20 seconds later it was pressing on with my downloads. I'm happy even if it never gets built in!
What does your health check look like? Mine just found something it couldn't handle....
Trying this again for the moment -> test: ["CMD", "curl", "-f", "http://google.com"] since the connectivity is blocked when the VPN goes down... :-)
@MYeager1967 Did that work for you to use:
test: ["CMD", "curl", "-f", "http://google.com"]
It gets stuck with that when I unplug the modem and I have to manually restart the container to get it working again.
It works, but it sometimes doesn't come back up. I'm at a loss on how to correctly health check this thing. I was hoping that @chrisjohnson00 would get back to me with his method so I could check it out...
> It works, but it sometimes doesn't come back up. I'm at a loss on how to correctly health check this thing. I was hoping that @chrisjohnson00 would get back to me with his method so I could check it out...
https://github.com/chrisjohnson00/docker-qBittorrentvpn/blob/main/qbittorrent/health.sh
It hasn't helped me the way I intended, but here you go.
Basically the same thing. Oh well, I was hoping your solution was a bit more advanced. :-) Maybe someday we'll find something that's rock solid...
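One reason a single probe may not be rock solid is that each check catches a different failure mode: grepping for tun0 misses a tunnel that is up but passing no traffic, while curl alone can hang on DNS when the VPN is down. A hypothetical combined check (the interface name and URL are assumptions):

```yaml
healthcheck:
  # Fail fast if the tunnel interface is gone, then verify traffic actually
  # passes through it; -m caps curl's total time so the probe can't hang.
  test: ["CMD-SHELL", "ip link show tun0 > /dev/null 2>&1 && curl -fsS -m 10 --interface tun0 https://www.google.com > /dev/null"]
  interval: 30s
  timeout: 15s
  retries: 3
```

The `--interface tun0` flag forces the request through the tunnel, so a check that would otherwise succeed over the raw WAN link still fails when the VPN is down.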
It's a bit frustrating when my ISP modem goes down and the container doesn't restart, instead hanging with different errors, like:
(1)
Thu Feb 11 07:27:39 2021 RESOLVE: Cannot resolve host address: ca-toronto.privacy.network:1198 (Temporary failure in name resolution)
Thu Feb 11 07:27:39 2021 Could not determine IPv4/IPv6 protocol
Thu Feb 11 07:27:39 2021 SIGUSR1[soft,init_instance] received, process restarting
or
when you do curl within the container
(2)
root@39cab3b40964:/# curl ifconfig.io
curl: (6) Could not resolve host: ifconfig.io
(3)
Thu Feb 11 07:49:42 2021 AEAD Decrypt error: bad packet ID (may be a replay): [ #832617 ] -- see the man page entry for --no-replay and --replay-window for more info or silence this warning with --mute-replay-warnings
I wish the container had a built-in feature that could reliably restart it whenever it detects any kind of error.
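A self-restart feature could look like an in-container watchdog loop. This is a hypothetical sketch, not a feature of this image: the interface name, thresholds, and function names are my assumptions; it relies on the container running with `restart: always`.

```shell
#!/bin/sh
# Hypothetical watchdog sketch: poll connectivity through the tunnel and,
# after repeated failures, signal PID 1 so the container exits and a
# `restart: always` policy recreates it.

# Pure helper: next failure count given the previous count and "ok"/"fail".
bump_fails() {
  if [ "$2" = "ok" ]; then echo 0; else echo $(( $1 + 1 )); fi
}

watchdog_loop() {
  fails=0
  while true; do
    if ping -I tun0 -c 1 -W 5 8.8.8.8 > /dev/null 2>&1; then
      fails=$(bump_fails "$fails" ok)
    else
      fails=$(bump_fails "$fails" fail)
      echo "watchdog: check failed ($fails/3)"
    fi
    if [ "$fails" -ge 3 ]; then
      echo "watchdog: giving up, stopping container"
      kill -TERM 1   # PID 1 exits -> Docker's restart policy takes over
      return
    fi
    sleep 30
  done
}

# Only start looping when explicitly asked, e.g. `watchdog.sh --run &`
if [ "${1:-}" = "--run" ]; then
  watchdog_loop
fi
```

Requiring three consecutive failures before acting avoids restart loops on a single dropped ping, which matters for errors like the AEAD decrypt warnings above that can be transient.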
Hey guys, I have another idea: I started using gluetun as a separate VPN container and installed qbittorrent in another container. It has helped a lot so far, and the gluetun dev has been asked to add this as a built-in feature; he seems interested in implementing it.
Any status on a built in function? Could we possibly add this as a feature request?
Here's my health check from a previous script that might help... sometimes the tunnel exists but doesn't pass data.
ping -I tun0 -c 1 8.8.8.8 > /dev/null && VPNUP=true || VPNUP=false
For some reason, I have to format my tests like so:
test: ["CMD", "ping", "-I", "tun0", "-c", "1", "8.8.8.8", ">", "/dev/null && VPNUP=true", "||", "VPNUP=false"]
I'm not having a whole lot of luck getting your command formatted properly, but I'll have a go at it later. Posting this in case anyone has already figured it out... Tried it with each part in quotes, this is just where I left off.
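The formatting trouble is likely because the exec form (`["CMD", ...]`) doesn't go through a shell, so `>` and `||` are handed to ping as literal arguments instead of being interpreted. Compose's `CMD-SHELL` variant runs the string under `/bin/sh -c`, so the one-liner works as written. A sketch:

```yaml
healthcheck:
  # CMD-SHELL wraps the string in `/bin/sh -c`, so redirection and || work.
  test: ["CMD-SHELL", "ping -I tun0 -c 1 8.8.8.8 > /dev/null || exit 1"]
  interval: 30s
  timeout: 10s
  retries: 3
```

A plain string value for `test:` (as in the compose file earlier in this thread) is treated as CMD-SHELL too, so that form also avoids the quoting fight.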
I have a fork that uses this check and also checks for qbittorrent crashing, triggering the docker container to stop... then I set it to auto-restart. It also has PIA port forwarding support. It seems to run well; I will submit a PR if no bugs come up.
https://github.com/Tailslide/docker-qBittorrentvpn
Docker image here:
https://hub.docker.com/repository/docker/tailslide/dockerqbittorrentvpn
@Tailslide - how's the crash checking working for you?
@chrisjohnson00 it's been running for a month or so no problems. The VPN has gone down a few times and qbittorrent crashed once. The container restarted and carried on with no intervention from me every time. It's nice not having to touch it.
@chrisjohnson00 Like @chrisjohnson00 said, it just works. So far, haven't had to do anything after setup.
@Tailslide - I pulled in your changes to my fork of this repo and tested.
How are you running your container?
I'm running mine in a kubernetes cluster. When the internet goes down, the container restarts, but I get stuck in execution of openvpn with the following errors.
RESOLVE: Cannot resolve host address: us-california.privacy.network:1198 (Temporary failure in name resolution)
I'm unclear what the problem is at the moment; deleting the pod causes everything to be re-created and it works. I'm investigating possible causes.
This is partly me taking notes, and also hoping that you know more than I do and this triggers some awesome brain cell activity that in the end helps me!
@chrisjohnson00 Sorry to hear it's not working for you. I'm running it off my NAS using Portainer and the compose file attached below. I haven't used Kubernetes, but reading their site, it sounds like it might be a CoreDNS side effect. You could try testing against the VPN server I am using: remote swiss.privacy.network 1198
Also, from some googling on the error, people mention you can substitute the server's IP address for the DNS name, which might be useful for debugging. Hmm... you could also try my NAME_SERVERS line if yours is different.
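The IP-substitution idea could even be scripted: resolve the hostname once while DNS still works and pin the IP into the config, so a restart with broken in-container DNS can still reach the server. A hypothetical sketch; the config path, the `remote HOST PORT` line layout, and the function names are my assumptions.

```shell
#!/bin/sh
# Hypothetical workaround sketch (not part of this image): pin the VPN
# server's IP into the OpenVPN config to sidestep in-container DNS failures
# like "RESOLVE: Cannot resolve host address" after a restart.

# Replace "remote HOST PORT" with "remote IP PORT" in a config file.
pin_remote() {
  conf=$1; host=$2; port=$3; ip=$4
  sed -i "s|^remote $host $port|remote $ip $port|" "$conf"
}

# Resolve a hostname to its first IPv4 address (empty string on failure).
resolve_ip() {
  getent ahostsv4 "$1" | awk '{print $1; exit}'
}

# Invoked with the config path, e.g. pin-remote.sh /config/openvpn/vpn.conf
if [ "$#" -ge 1 ]; then
  conf=$1
  host=$(awk '/^remote /{print $2; exit}' "$conf")
  port=$(awk '/^remote /{print $3; exit}' "$conf")
  ip=$(resolve_ip "$host")
  if [ -n "$ip" ]; then
    pin_remote "$conf" "$host" "$port" "$ip"
    echo "pinned $host -> $ip"
  fi
fi
```

The trade-off: a pinned IP goes stale if the provider rotates addresses, so this is a debugging aid more than a permanent fix.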
Can you post a more complete log?
version: '3.4'
services:
  dockerqbittorrentvpn:
    image: tailslide/dockerqbittorrentvpn:latest
    environment:
      - VPN_ENABLED=yes
      - PIA_PORT_FORWARD=yes
      - PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/config
      - NAME_SERVERS=8.8.8.8,8.8.4.4
      - LAN_NETWORK=192.168.12.0/24
      - TZ=Canada/Mountain
      - DEBIAN_FRONTEND=noninteractive
    ports:
      - "8080:8080"
    devices:
      - /dev/net/tun
    restart: always
    cap_add:
      - NET_ADMIN
    volumes:
      - /volume1/IncomingTV:/IncomingTV
      - /volume1/video:/video
      - /volume1/temp:/temp
      - /volume1/torrents:/torrents
      - /volume1/docker/qbittorrent:/config
    networks:
      - torrent
networks:
  torrent:
    external: true
Possibly related? https://tech.findmypast.com/k8s-dns-lookup/
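That article is about the Kubernetes default of `ndots:5`, which makes external names go through several failing search-domain lookups before resolving. If that is the culprit here, the pod spec can override it. A hypothetical fragment:

```yaml
# Pod spec fragment: lower ndots so external names like
# us-california.privacy.network are tried as absolute names first.
dnsConfig:
  options:
    - name: ndots
      value: "1"
```

This wouldn't explain the hard "Temporary failure in name resolution", but it would cut the lookup churn that can make flaky DNS worse.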
My compose file contains this health check and it seems to reconnect on failure:

healthcheck:
  test: ping -c 1 www.google.com || exit 1
  interval: 30s
  timeout: 10s
  retries: 3
I'm using curl, but it seems to be getting the job done as well. Autoheal seems to be pretty good at policing health checks and restarting those that have failed.
I've observed that this container, when connected via ExpressVPN, will fail to re-establish the connection after a WAN link interruption. Other torrent+VPN containers support some form of automatic container restart if the link fails for whatever reason. One example here: https://haugene.github.io/docker-transmission-openvpn/known-issues/
Is any type of network watchdog / restart possible with this docker build?
Thanks in advance