qdm12 / gluetun

VPN client in a thin Docker container for multiple VPN providers, written in Go, and using OpenVPN or Wireguard, DNS over TLS, with a few proxy servers built-in.
https://hub.docker.com/r/qmcgaw/gluetun
MIT License

Bug: qBittorrent stops listening to the open port after the gluetun VPN restarts internally #1407

Status: Open · Gylesie opened this issue 1 year ago

Gylesie commented 1 year ago

Is this urgent?

No

Host OS

Ubuntu 22.04

CPU arch

x86_64

VPN service provider

Custom

What are you using to run the container

docker-compose

What is the version of Gluetun

Running version latest built on 2022-12-31T17:50:58.654Z (commit ea40b84)

What's the problem 🤔

Everything works as expected when the qBittorrent and gluetun containers are freshly started: qBittorrent is listening on the forwarded port and is reachable from the internet. However, when gluetun runs for a longer period of time and the VPN stops working briefly, triggering gluetun's internal VPN restart, the open port in qBittorrent is no longer reachable.

What I found is that changing the listening port in the qBittorrent WebUI settings to some random port, saving the configuration, and then immediately reverting to the original port makes qBittorrent listen again and become reachable. Simply restarting the qBittorrent container without changing anything also works.
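The manual WebUI trick described above can be automated against qBittorrent's Web API (`/api/v2/auth/login` and `/api/v2/app/setPreferences`). This is only a sketch: the URL, credentials, and cookie path are example values, not anything gluetun or qBittorrent ships.

```shell
#!/bin/sh
# Sketch of automating the workaround: nudge qBittorrent's listen port
# away and back via its Web API, forcing it to rebind on the tunnel.
# QBT_URL and the admin credentials below are examples; adjust for your setup.
QBT_URL="${QBT_URL:-http://localhost:8080}"
COOKIES="${COOKIES:-/tmp/qbt.cookies}"

# Build the form body expected by /api/v2/app/setPreferences.
payload() {
    printf 'json={"listen_port": %d}' "$1"
}

toggle_listen_port() {
    port="$1"
    # Log in; qBittorrent stores the session in an SID cookie.
    curl -s -c "$COOKIES" \
        --data "username=admin&password=adminadmin" \
        "$QBT_URL/api/v2/auth/login"
    # Move the port away, then restore it, mimicking the WebUI trick.
    curl -s -b "$COOKIES" --data "$(payload $((port + 1)))" \
        "$QBT_URL/api/v2/app/setPreferences"
    curl -s -b "$COOKIES" --data "$(payload "$port")" \
        "$QBT_URL/api/v2/app/setPreferences"
}

# usage: toggle_listen_port 6881
```

Run it from cron or a healthcheck hook whenever the port stops responding.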

Is there anything gluetun can do to prevent this? Is this solely qBittorrent's bug? Unfortunately, I have no idea.

Thanks!

Share your logs

INFO [healthcheck] program has been unhealthy for 36s: restarting VPN
INFO [vpn] stopping
INFO [firewall] removing allowed port xxxxxx...
INFO [vpn] starting
INFO [firewall] allowing VPN connection...
INFO [wireguard] Using available kernelspace implementation
INFO [wireguard] Connecting to yyyyyyyyy:yyyyy
INFO [wireguard] Wireguard is up
INFO [firewall] setting allowed input port xxxxxx through interface tun0...
INFO [healthcheck] healthy!

Share your configuration

No response

xoxfaby commented 10 months ago

Docker containers live and die by their main process.

> persistent entry point process

This process would be under the control of gluetun, not Docker. Gluetun could then have this process end, which would end the container, which would in turn cause Docker to restart it (if the restart policy is configured to do so).

> Processes inside Docker containers don't have the ability to manipulate the state of the container itself OOTB.

They absolutely do, by necessity, simply because the container only runs as long as the main process is running.
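The exit-and-let-Docker-restart approach relies entirely on the restart policy. A minimal compose sketch of that setup (image names and service layout are illustrative, not prescribed by this thread):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    restart: unless-stopped   # Docker restarts the container if its main process exits
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # share gluetun's network namespace
    restart: unless-stopped
```

With `restart: no` (the default), an intentional process exit would leave the container stopped, which is the workflow concern raised below.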

eiqnepm commented 10 months ago

> Docker containers live and die by their main process.
>
> > persistent entry point process
>
> This process would be under the control of gluetun, not docker. And gluetun could then have this process end, which would end the container, which would then cause docker to restart it (if it is configured to do so by the restart policy).

You are correct; however, this would break workflows for those who do not want the container to restart on actual failures.

xoxfaby commented 10 months ago

> Docker containers live and die by their main process.
>
> persistent entry point process
>
> This process would be under the control of gluetun, not docker. And gluetun could then have this process end, which would end the container, which would then cause docker to restart it (if it is configured to do so by the restart policy).
>
> You are correct, however this would break workflows for those who do not want the container to restart on actual failures.

It would simply need to be optional.

eiqnepm commented 10 months ago

> Docker containers live and die by their main process.
>
> persistent entry point process
>
> This process would be under the control of gluetun, not docker. And gluetun could then have this process end, which would end the container, which would then cause docker to restart it (if it is configured to do so by the restart policy).
>
> You are correct, however this would break workflows for those who do not want the container to restart on actual failures.
>
> it would simply need to be optional

Giving the gluetun container access to /var/run/docker.sock could also be optional, and would likewise not break the aforementioned workflows.

These are two ways to achieve the same thing, but I think having the gluetun container restart itself, instead of relying on a restart policy, is the more ideal solution if gluetun were to go the container-restart route to address this issue.
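For context, restarting a container through the Docker Engine API over the unix socket is what mounting /var/run/docker.sock makes possible. A minimal sketch (the container name passed in is an example):

```shell
#!/bin/sh
# Restart a container by name or ID via the Docker Engine API
# (POST /containers/{id}/restart) over the mounted unix socket.
restart_container() {
    curl -s --unix-socket /var/run/docker.sock \
        -X POST "http://localhost/containers/$1/restart"
}

# usage: restart_container gluetun
```

This is the extra access (and extra logic) being weighed against simply exiting the main process.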

xoxfaby commented 10 months ago

One complicated solution that needs gluetun to get extra, unnecessary access and then implement more complex logic to go out and restart other containers, versus a dead simple solution that takes two lines of code to implement.

eiqnepm commented 10 months ago

> One complicated solution that needs gluetun to get extra unnecessary access to then implement more complex logic to go out and restart other containers, vs a dead simple solution that takes 2 lines of code to implement.

What I suggested was for gluetun to restart itself, say when an environment variable is enabled and the gluetun container has access to the Docker socket. This way you get the benefit of the service network restarting, which indirectly restarts all of the dependent containers, and you don't have to use the always restart policy, which is undesirable for some.

I wouldn't call it complex; compared to exiting the process it is obviously more "logic", but neither is challenging to implement and maintain.

Both are viable suggestions. Like I said, I still believe it would be better not to break the no-restart-policy workflow, but that's subjective.

I don't think there's more for me to add.

eiqnepm commented 10 months ago

> @eiqnepm I am having this issue with qBittorrent but also other containers linked to gluetun (radarr, sonarr, bazarr, jackett); all of them become inaccessible after gluetun restarts the connection. Could your portcheck container work with other containers apart from qBittorrent?

I have made a restart branch for portcheck. With this branch you can use Docker labels to select containers that will be restarted when the selected TCP port is inaccessible (example).

Jhutjens92 commented 9 months ago

> @eiqnepm I am having this issue with qBittorrent but also other containers linked to gluetun (radarr, sonarr, bazarr, jackett); all of them become inaccessible after gluetun restarts the connection. Could your portcheck container work with other containers apart from qBittorrent?
>
> I have made a restart branch for portcheck. With this branch you can use Docker labels to select containers that will be restarted when the selected TCP port is inaccessible (example).

Is it possible to check multiple ports with one Docker instance? In my case Prowlarr and qBittorrent both have the issue.

eiqnepm commented 9 months ago

> @eiqnepm I am having this issue with qBittorrent but also other containers linked to gluetun (radarr, sonarr, bazarr, jackett); all of them become inaccessible after gluetun restarts the connection. Could your portcheck container work with other containers apart from qBittorrent?
>
> I have made a restart branch for portcheck. With this branch you can use Docker labels to select containers that will be restarted when the selected TCP port is inaccessible (example).
>
> Is it possible to check multiple ports with one Docker instance? In my case Prowlarr and qBittorrent both have the issue.

It could be implemented.

You can restart multiple containers. Are you not able to just check the one port and have both containers restart? I assume that if one service is unreachable, the other will be too.

Jhutjens92 commented 9 months ago

> @eiqnepm I am having this issue with qBittorrent but also other containers linked to gluetun (radarr, sonarr, bazarr, jackett); all of them become inaccessible after gluetun restarts the connection. Could your portcheck container work with other containers apart from qBittorrent?
>
> I have made a restart branch for portcheck. With this branch you can use Docker labels to select containers that will be restarted when the selected TCP port is inaccessible (example).
>
> Is it possible to check multiple ports with one Docker instance? In my case Prowlarr and qBittorrent both have the issue.
>
> It could be implemented.
>
> You can restart multiple containers. Are you not able to just check the one port and have both containers restart? I assume that if one service is unreachable the other will be too.

That's how I currently have it set up: when the qBittorrent port is unreachable, both containers restart. I'll see how it works.
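The "check one port, restart several containers" setup described above can be sketched generically with `nc` and the Docker CLI. This is not how the portcheck project itself works; host, port, and container names are example values, and the check must run from somewhere that can actually reach the forwarded port externally.

```shell
#!/bin/sh
# Generic watchdog sketch: if the forwarded port stops accepting
# connections, restart every dependent container.
CHECK_HOST="${CHECK_HOST:-203.0.113.1}"   # your VPN exit IP (example)
CHECK_PORT="${CHECK_PORT:-6881}"
CONTAINERS="qbittorrent prowlarr"          # word-split on purpose below

check_and_restart() {
    if ! nc -z -w 5 "$CHECK_HOST" "$CHECK_PORT"; then
        # Port closed: restart all dependent containers in one call.
        docker restart $CONTAINERS
    fi
}

# e.g. invoke check_and_restart from cron every few minutes
```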

fabiengagne commented 9 months ago

> I've gone ahead and made a container portcheck purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.

> Thank you for writing this - works great! For others experiencing this issue, I'm wondering if it would also help to increase the HEALTH_VPN_DURATION_INITIAL config option. I'm seeing 6 reconnects in the last 12 hours, which seems really high. Is the default setting of 6 seconds too sensitive?

I can confirm that this fixed it for me. I set HEALTH_VPN_DURATION_INITIAL=120s about two weeks ago and haven't had this problem since. Comcast hiccups often in my area, so 6 seconds was definitely too aggressive for me.
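For reference, HEALTH_VPN_DURATION_INITIAL is a gluetun environment variable, so the fix these comments describe is a one-line compose change (service layout assumed typical):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    environment:
      # Default is 6s per the discussion above; raising it lets gluetun
      # tolerate brief connection drops without restarting the VPN.
      - HEALTH_VPN_DURATION_INITIAL=120s
```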

Snoras commented 8 months ago

Setting HEALTH_VPN_DURATION_INITIAL=120s solved it for me as well.

giraffeingreen commented 6 months ago

I was searching the internet for a solution and I found https://portcheck.transmissionbt.com/4330, which returns 1 if the port is open and 0 if it's closed.

This means you can add a healthcheck to the gluetun container:

    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://portcheck.transmissionbt.com/4330 | grep -q 1 || exit 1"]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s

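Note that a failing healthcheck only marks the container unhealthy; Docker does not restart unhealthy containers by itself. Pairing the check with a watcher such as willfarrell/autoheal closes the loop. A sketch, assuming autoheal's default `autoheal=true` label convention:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    labels:
      - autoheal=true   # picked up by the autoheal container below
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://portcheck.transmissionbt.com/4330 | grep -q 1 || exit 1"]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s
  autoheal:
    image: willfarrell/autoheal
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```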
aidan-gibson commented 4 months ago

I believe I fixed it by manually setting HEALTH_SERVER_ADDRESS=127.0.0.1:5921 and HTTP_CONTROL_SERVER_ADDRESS=:8456 (these are just random unused ports on my machine) as the default ports were in use. You can check if a port is in use via nc -zv localhost <port>.

aidan-gibson commented 4 months ago

Never mind, `unhealthy: dialing: dial tcp4: lookup cloudflare.com: i/o timeout` is back 😔

nebb00 commented 3 months ago

Need to do this on unraid...

CevreMuhendisi commented 2 months ago

> I was searching the internet for a solution and I found https://portcheck.transmissionbt.com/4330 which returns 1 if the port is open and 0 if it's closed.
>
> Meaning that you can add a healthcheck to the gluetun container (see the healthcheck snippet above).

Thank you so much

argonan0 commented 2 months ago

This is still a problem even with HEALTH_VPN_DURATION_INITIAL=120s.

qBittorrent fails to reconnect to the forwarded port, but downloads seem to still work. Seeding does not.

What is the last version of qBittorrent that uses libtorrent1? I am on 4.5.5 and it's using libtorrent2, which, as mentioned, seems to be a component of the issue.

eiqnepm commented 2 months ago

> This is still a problem even with HEALTH_VPN_DURATION_INITIAL=120s.
>
> qBittorrent fails to reconnect to the forwarded port, but downloads seem to still work. Seeding does not.
>
> What is the last version of qBittorrent that uses libtorrent1? I am on 4.5.5 and it's using libtorrent2, which, as mentioned, seems to be a component of the issue.

Running an outdated BitTorrent client is probably not a good idea. Are you unable to use https://github.com/eiqnepm/portcheck or the above healthcheck workaround?

argonan0 commented 2 months ago

What does it do, more specifically than "check a port"? I would have a hard time without a guide on how to integrate that into UNRAID. I know very little about Docker and would be more likely to break something without a stable framework to manage it.

It's fairly common, I think, to hold back on upgrading to new qBittorrent releases until they are proven good.

That said, this problem does not manifest in a Linux VM using the same qBittorrent version (or a newer version like 4.6.7) connected with a native WireGuard client, so it's tough to pin the blame entirely on libtorrent2: it works fine in that environment without requiring restarts when the VPN reconnects. libtorrent2 may play a role but is not exclusively the cause of the issue as I see it.

argonan0 commented 2 months ago

> This is still a problem even with HEALTH_VPN_DURATION_INITIAL=120s.
>
> qBittorrent fails to reconnect to the forwarded port, but downloads seem to still work. Seeding does not.
>
> What is the last version of qBittorrent that uses libtorrent1? I am on 4.5.5 and it's using libtorrent2, which, as mentioned, seems to be a component of the issue.

I'm not sure what to make of this at the moment, but the port-forward disconnection issue has happened only once so far, and each subsequent day after the qBittorrent container restart it has been fine.

CevreMuhendisi commented 2 months ago

I resolved this issue by connecting via OpenVPN. I will also share the code.

Change your OpenVPN client IP, port, and adapter name, then:

    sudo nano /etc/openvpn/iptables.sh    # paste the script below
    chmod +x /etc/openvpn/iptables.sh
    sudo /etc/openvpn/iptables.sh up

    #!/bin/bash
    if [ "$1" = "up" ]; then
        iptables -A FORWARD -i tun0 -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport PORT -j DNAT --to-destination openvpn_client_ip
    elif [ "$1" = "down" ]; then
        iptables -D FORWARD -i tun0 -j ACCEPT
        iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
        iptables -t nat -D PREROUTING -i eth0 -p tcp --dport PORT -j DNAT --to-destination openvpn_client_ip
    fi

(Replace PORT with your forwarded port and openvpn_client_ip with your OpenVPN client's IP.)