Gylesie opened this issue 1 year ago
Docker containers live and die by their main process.

> persistent entry point process

This process would be under the control of gluetun, not Docker. And gluetun could then have this process end, which would end the container, which would then cause Docker to restart it (if it is configured to do so by the restart policy).

> Processes inside Docker containers don't have the ability to manipulate the state of the container itself OOTB.

They absolutely do by necessity, simply by the fact that the container only runs as long as the main process is running.
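To make the restart-policy idea concrete, here is a minimal compose sketch of the kind of setup being discussed (image names and service layout are illustrative, not taken from any config in this thread):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    # If the container's main process exits, this policy tells Docker
    # to start the container again automatically.
    restart: unless-stopped

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    # Shares gluetun's network namespace, which is why it loses its
    # forwarded port whenever gluetun's network stack is recreated.
    network_mode: "service:gluetun"
    restart: unless-stopped
```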
> And gluetun could then have this process end, which would end the container, which would then cause Docker to restart it (if it is configured to do so by the restart policy).
You are correct; however, this would break workflows for those who do not want the container to restart on actual failures.
It would simply need to be optional.
Giving the gluetun container access to /var/run/docker.sock could be optional and would also not break the aforementioned workflows.

Two ways to achieve the same thing, but I think having the gluetun container restart itself, instead of relying on a restart policy, is the more ideal solution if gluetun were going to go the container-restart route to address this issue.
One complicated solution that needs gluetun to get extra, unnecessary access and then implement more complex logic to go out and restart other containers, versus a dead-simple solution that takes two lines of code to implement.
What I suggested was for gluetun to restart itself, say, when an environment variable is enabled and the gluetun container has access to the Docker socket. This way you get the benefit of the service network restarting, which indirectly restarts all of the dependent containers, and you don't have to use the always restart policy, which is undesirable for some.

I wouldn't call it complex; obviously in comparison to exiting the process it would be more "logic", but neither is challenging to implement and maintain.
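A rough sketch of what the socket-based variant could look like in compose; note that the environment variable name below is purely hypothetical (gluetun has no such option today) and is shown only to illustrate the suggestion:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    volumes:
      # Gives the container access to the host's Docker API, which is
      # what would allow gluetun to restart itself via the socket.
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # Hypothetical opt-in flag for the behaviour suggested above;
      # not an existing gluetun setting.
      - RESTART_SELF_ON_FAILURE=on
```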
Both are viable suggestions. Like I said, I still believe it would be better not to break the no-restart-policy workflow, but that's subjective.
I don't think there's more for me to add.
@eiqnepm I am having this issue with qbittorrent but also with other containers linked to gluetun (radarr, sonarr, bazarr, jackett); all of them become inaccessible after gluetun restarts the connection. Could your portcheck container work with other containers apart from qbittorrent?
I have made a `restart` branch for portcheck. With this branch you can use Docker labels to select containers that will be restarted when the selected TCP port is inaccessible (example).
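As a rough illustration of the label-based selection described above (the label key shown here is made up for this sketch; the real key and configuration are documented in the linked example and the portcheck repository):

```yaml
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"
    labels:
      # Illustrative label key only; see the portcheck example for the
      # actual label the restart branch looks for.
      - "portcheck.restart=true"
```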
Is it possible to check multiple ports with one Docker instance? In my case Prowlarr and qBittorrent both have the issue.
It could be implemented.

You can restart multiple containers. Are you not able to just check the one port and have both containers restart? I assume that if one service is unreachable, the other will be too.
That's how I currently have it set up. When the qBittorrent port is unreachable, both containers restart. I'll see how it works.
> I've gone ahead and made a container, portcheck, purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.

Thank you for writing this - works great! For others experiencing this issue, I'm wondering if it would also help to increase the `HEALTH_VPN_DURATION_INITIAL` config option. I'm seeing 6 reconnects in the last 12 hours, which seems really high. Is the default setting of 6 seconds too sensitive?

I can confirm that this fixed it for me. I set `HEALTH_VPN_DURATION_INITIAL=120s` about two weeks ago and haven't had this problem since. Comcast hiccups often in my area, so 6 seconds was definitely too aggressive for me.
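For anyone wanting to try the same change, the variable goes in the gluetun service's environment, roughly like this (a sketch; merge it into your existing compose file):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    environment:
      # Give the VPN 120s (instead of the 6s default) to recover
      # before the internal health check triggers a restart.
      - HEALTH_VPN_DURATION_INITIAL=120s
```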
Setting `HEALTH_VPN_DURATION_INITIAL=120s` is what solved it for me.

Setting `HEALTH_VPN_DURATION_INITIAL=120s` solved it for me as well.
I was searching the internet for a solution and I found https://portcheck.transmissionbt.com/4330, which returns 1 if the port is open and 0 if it's closed.

Meaning that you guys can add a healthcheck to the gluetun container:

```yaml
healthcheck:
  test: ["CMD-SHELL", "wget -qO- http://portcheck.transmissionbt.com/4330 | grep -q 1 || exit 1"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s
```
I believe I fixed it by manually setting `HEALTH_SERVER_ADDRESS=127.0.0.1:5921` and `HTTP_CONTROL_SERVER_ADDRESS=:8456` (these are just random unused ports on my machine), as the default ports were in use. You can check if a port is in use via `nc -zv localhost <port>`.

Nevermind, `unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout` is back 😔
Need to do this on unraid...
> Meaning that you guys can add a healthcheck to the gluetun container: [...]
Thank you so much
This is still a problem even with `HEALTH_VPN_DURATION_INITIAL=120s`.

qBittorrent fails to reconnect to the forwarded port, but downloads seem to still work. Seeding does not.

What is the last version of qBittorrent that uses libtorrent 1? I am on 4.5.5 and it's using libtorrent 2, which, as mentioned, seems to be a component of the issue.
Running an outdated BitTorrent client is probably not a good idea. Are you unable to use https://github.com/eiqnepm/portcheck or the above healthcheck workaround?
What does it do (more specifically than 'check a port')? I would have a hard time without a guide on how to integrate that into UNRAID. I know very little about docker and would be more likely to break something without a stable framework to manage it.
It's fairly common, I think, to hold back on upgrading to new qBittorrent releases until they are proven 'good'.
That said, this problem does not manifest in a Linux VM using the same qBittorrent version (or a newer version like 4.6.7) connected with a native WireGuard client. So it's tough to pin the blame entirely on libtorrent 2, since it works fine in that environment without requiring restarts when the VPN reconnects. libtorrent 2 may play a role, but it is not exclusively the cause of the issue as I see it.
I'm not sure what to make of this at the moment, but the port-forward disconnection issue has happened only the one time so far, and each subsequent day after the qBittorrent container restart it has been fine.
I resolved this issue by connecting via OpenVPN. I will also share the code.

```yaml
volumes:
  - gluetun:/gluetun/custom.conf:ro
environment:
```

Change your OpenVPN client IP, port and adapter name:

```sh
sudo nano /etc/openvpn/iptables.sh
```

Paste the code below, then:

```sh
chmod +x /etc/openvpn/iptables.sh
sudo /etc/openvpn/iptables.sh up
```

```sh
if [ "$1" = "up" ]; then
  iptables -A FORWARD -i tun0 -j ACCEPT
  iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport PORT -j DNAT --to-destination openvpn_client_ip
elif [ "$1" = "down" ]; then
  iptables -D FORWARD -i tun0 -j ACCEPT
  iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
  iptables -t nat -D PREROUTING -i eth0 -p tcp --dport PORT -j DNAT --to-destination openvpn_client_ip
fi
```
Is this urgent?
No
Host OS
Ubuntu 22.04
CPU arch
x86_64
VPN service provider
Custom
What are you using to run the container
docker-compose
What is the version of Gluetun
Running version latest built on 2022-12-31T17:50:58.654Z (commit ea40b84)
What's the problem 🤔
Everything works as expected when the qBittorrent and gluetun containers are freshly started. qBittorrent is listening on the open port and it is reachable via the internet. However, when gluetun runs for a longer period of time and the VPN stops working for a brief moment for some reason, triggering gluetun's internal VPN restart, the open port in qBittorrent is no longer reachable.

What I found out was that by changing the open listening port in the qBittorrent WebUI settings to some random port, saving the configuration and then immediately reverting the change to the original port, it starts listening and is once again reachable. Just restarting the qBittorrent container without changing anything also worked.
Is there anything gluetun can do to prevent this? Is this solely qBittorrent's bug? Unfortunately, I have no idea.
Thanks!
Share your logs
Share your configuration
No response