Gylesie opened this issue 1 year ago
Exactly the same is happening to me as well. The workaround @Gylesie mentioned works for me too, but unfortunately it is not ideal when one wants to rely on the Raspberry Pi just working without needing any input.
Maybe my docker-compose.yml will help with debugging/reproducing the error:
```yaml
version: "3"
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<redacted>
      - WIREGUARD_ADDRESSES=<redacted>
      - SERVER_CITIES=<redacted>
      - FIREWALL_VPN_INPUT_PORTS=<redacted> # mullvad forwarded port
      - PUID=1000
      - PGID=1000
    ports:
      - 8080:8080 # qbittorrent webgui
      - <redacted>:<redacted> # mullvad forwarded port
      - <redacted>:<redacted>/udp # mullvad forwarded port
    restart: unless-stopped
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=1000
      - WEBUI_PORT=8080
    volumes:
      - <redacted>:/config
      - <redacted>:/downloads
    depends_on:
      gluetun:
        condition: service_healthy
    restart: unless-stopped
```
EDIT: The same is happening with Deluge too.
EDIT 2: This doesn't seem to happen with Transmission.
Chiming in that I have the same issue with qbittorrent and gluetun with the hotio image for qbittorrent. @Gylesie's workaround is okay but troublesome when it happens at night.
It might be because there is a listener going through the tunnel, and gluetun destroys that tunnel on an internal VPN restart and re-creates it.
I had the same issue with the HTTP client fetching version info/public IP info from within gluetun, and the fix was to close idle connections for the HTTP client once the tunnel is up again.
A bit weird though, since a server (listener) should still work across VPN restarts (it does work with e.g. the shadowsocks server). It is also strange that it works with Transmission. But from what you said:
saving the configuration and then immediately after that reverting the change to the original port, it starts listening and it is now once again reachable
Doing this restarts the listener, which is why it works again, I would say.
I don't think there is much I can do from within Gluetun. You could perhaps have some script reading the logs of Gluetun and restart qbittorrent when a VPN restart occurs. Not ideal, but I cannot think of anything better for now.
Hmm, that's unfortunate. Are you interested in implementing a way to define a custom script after the VPN gets restarted? That would be kinda useful in situations like this.
@qdm12 When the tunnel gets destroyed, does that mean the network interface also gets destroyed and recreated afterwards?
Are you interested in implementing a way to define a custom script after the VPN gets restarted? That would be kinda useful in situations like this.
Yes and no, because this script would likely have to run on the host, outside the gluetun container. We could eventually add an option for Gluetun to perform Docker host operations by bind-mounting the Docker socket, but that's risky security-wise (although it already runs as root with NET_ADMIN capabilities, so maybe why not). Anyway, the backlog of more pressing issues is already thick, but let's keep this open; it would be interesting to explore this more.
In the meantime, feel free to use this script I made, it's not perfect but good enough. Keep it running the whole time on the host system.
```bash
#!/bin/bash
# Gluetun monitoring script by Gylesie. More info:
# https://github.com/qdm12/gluetun/issues/1407

######### Config:
gluetun_container_id="gluetun"
qbittorrent_container_id="qbittorrent"
timeout="60"
docker="/usr/bin/docker"
#################################################

log() {
    echo "$(date) [INFO] $1"
}

# Wait for the container to be running
while ! "$docker" inspect "$gluetun_container_id" | jq -e '.[0].State.Running' > /dev/null; do
    log "Waiting for the container($gluetun_container_id) to be up and running! Sleeping for $timeout seconds..."
    sleep "$timeout"
done

# Store the start time of the script
start_time=$(date +%s)

# Stream the logs and process new lines only
"$docker" logs -t -f "$gluetun_container_id" 2>&1 | while read -r line; do
    # Get the timestamp of the log line
    log_time=$(date -d "$(echo "$line" | cut -d ' ' -f1)" +%s)
    # Check if the log line was generated after the script started
    if [[ "$log_time" -ge "$start_time" ]]; then
        # Check if the VPN was restarted
        if [[ "$line" =~ "[wireguard] Wireguard is up" ]]; then
            # Check if the qbittorrent container is running
            if "$docker" inspect "$qbittorrent_container_id" | jq -e '.[0].State.Running' > /dev/null; then
                log "Restarting qbittorrent!"
                "$docker" restart "$qbittorrent_container_id"
            else
                log "qBittorrent container($qbittorrent_container_id) is not running! Passing..."
            fi
        fi
    fi
done
```
Are you interested in implementing a way to define a custom script after the VPN gets restarted? That would be kinda useful in situations like this.
yes and no, because this script would likely have to run on the host outside the gluetun container. We could eventually as an option add capabilities for Gluetun to do Docker host operations by bind mounting the docker socket, but that's kinda risky security wise (although it already runs as root + NET_ADMIN capabilities, so maybe why not). Anyway the backlog of more pressing issues is already thick, but let's keep this opened, it would be interesting to explore this more.
I'd imagine it would be possible to have some environment variables for Gluetun which specify the address, port, username and password of your qBittorrent instance; then Gluetun could use the qBittorrent web API to change the port and back whenever the tunnel is restarted. This wouldn't require any special Docker permissions. Obviously not the cleanest solution, but a solution nonetheless.
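As a rough sketch of that idea (untested; the endpoint paths follow the qBittorrent WebUI API, but the host, credentials and helper names here are illustrative assumptions, not anything Gluetun ships):

```shell
#!/bin/sh
# Sketch: bounce qBittorrent's listen port via its WebUI API so libtorrent
# rebinds after a tunnel restart. Host/credentials below are assumptions.
QBT_URL="${QBT_URL:-http://127.0.0.1:8080}"
QBT_USER="${QBT_USER:-admin}"
QBT_PASS="${QBT_PASS:-adminadmin}"
QBT_PORT="${QBT_PORT:-6881}"
COOKIES="$(mktemp)"

# Pure helper: form body for /api/v2/app/setPreferences.
prefs_payload() {
  printf 'json={"listen_port":%s}' "$1"
}

flip_port() {
  # Log in; qBittorrent stores the session in an SID cookie.
  curl -s -c "$COOKIES" \
    --data "username=${QBT_USER}&password=${QBT_PASS}" \
    "${QBT_URL}/api/v2/auth/login" > /dev/null
  # Set the port to 0, then restore it, forcing a rebind.
  curl -s -b "$COOKIES" --data "$(prefs_payload 0)" \
    "${QBT_URL}/api/v2/app/setPreferences"
  curl -s -b "$COOKIES" --data "$(prefs_payload "$QBT_PORT")" \
    "${QBT_URL}/api/v2/app/setPreferences"
}

# Uncomment to run against a live instance:
# flip_port
```

This is essentially what the later workarounds in this thread automate: zero the port, then restore it.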
@Eiqnepm I wasn't aware of such a web API; can you create a separate issue for this? Definitely something doable!
The API is documented here; I went ahead and created the new issue https://github.com/qdm12/gluetun/issues/1441#issue-1612862391. Thanks a bunch for the quick response!
I've gone ahead and made a container, portcheck, purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.
Environment variables

| Variable | Default | Description |
|---|---|---|
| QBITTORRENT_PORT | 6881 | qBittorrent incoming connection port |
| QBITTORRENT_WEBUI_PORT | 8080 | Port of the qBittorrent WebUI |
| QBITTORRENT_WEBUI_SCHEME | http | Scheme of the qBittorrent WebUI |
| QBITTORRENT_USERNAME | admin | qBittorrent WebUI username |
| QBITTORRENT_PASSWORD | adminadmin | qBittorrent WebUI password |
| TIMEOUT | 300 | Time in seconds between each port check |
| DIAL_TIMEOUT | 5 | Time in seconds before the port check is considered incomplete |
I've just updated the container so it no longer relies on the Gluetun HTTP control server for the public IP address of the VPN connection; it now uses the outbound address from within the Gluetun service network to check the qBittorrent incoming port. This also has the added benefit of not needing to query the qBittorrent incoming port from the public IP address of your server.
For anyone that was using this before I made the change, make sure to run the container inside of the Gluetun service network and update the environment variables which have changed.
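Assuming the defaults above, a minimal Compose service for portcheck might look like the following (a hedged sketch: the image tag is an assumption, so verify it and the variable names against the portcheck README):

```yaml
# Hypothetical sketch only - verify against the portcheck README.
portcheck:
  image: eiqnepm/portcheck:latest  # assumed image tag
  network_mode: "service:gluetun"  # must share Gluetun's service network
  environment:
    - QBITTORRENT_PORT=6881
    - QBITTORRENT_WEBUI_PORT=8080
    - QBITTORRENT_USERNAME=admin
    - QBITTORRENT_PASSWORD=adminadmin
  restart: unless-stopped
```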
I recently switched from linuxserver/transmission to linuxserver/qbittorrent and noticed that qbittorrent (running inside the gluetun Docker network) stops working after some time. I suspect this is because gluetun restarts itself internally for some reason. I am glad to see I am not the only one who has noticed this issue.
The extra container solution is nice but not ideal. I think I will revert to transmission until a proper solution is found, but I really appreciate all your efforts. I will keep subscribed for updates.
I've gone ahead and made a container, portcheck, purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.
Thank you for writing this - works great!
For others experiencing this issue, I'm wondering if it would also help to increase the HEALTH_VPN_DURATION_INITIAL config option. I'm seeing 6 reconnects in the last 12 hours, which seems really high. Is the default setting of 6 seconds too sensitive?
Thank you for writing this - works great!
My pleasure!
After reading the wiki, it seems the healthcheck was primarily created due to the unreliability of OpenVPN connections. Considering I'm using WireGuard, which is stateless, I've just decided to completely disable the healthcheck feature and see how that goes. With my current knowledge, barring my VPN provider itself going offline, I can't think of a reason why my connection would be interrupted (I guess we'll find out).
While the healthcheck feature cannot be disabled per se, you can just set the HEALTH_TARGET_ADDRESS to the HEALTH_SERVER_ADDRESS, which defaults to 127.0.0.1:9999.
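In Compose terms, that trick is just one environment variable on the gluetun service (a sketch; the address shown is the documented default):

```yaml
services:
  gluetun:
    environment:
      # Point the health check at Gluetun's own health server so it
      # always succeeds, effectively neutering the healthcheck.
      - HEALTH_TARGET_ADDRESS=127.0.0.1:9999
```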
Is the default setting of 6 seconds too sensitive?
I can confirm that this fixed it for me. I set HEALTH_VPN_DURATION_INITIAL=120s about two weeks ago and haven't had this problem since. Comcast hiccups often in my area, so 6 seconds was definitely too aggressive for me.
In qBittorrent you can go into Options > Advanced and lock the network interface to tun0. This fixed the health check disconnect/reconnect issue for me months ago, as it's an issue with qBittorrent not handling reconnects correctly. I will still probably set HEALTH_VPN_DURATION_INITIAL=120s, just because I hate seeing a bunch of reconnects in the logs.
Also, someone just posted a bug that tun0 disappeared after the last update, but it hasn't been verified yet.
I can also confirm this. I was having this problem regularly, but locking the network interface to tun0 in qBittorrent has also solved it for me.
Any chance you're on the latest version and not hitting the missing-tun0 bug? Someone pulled yesterday and said they lost it, but they are also having OpenVPN cert issues, so it's possibly not a valid bug but a symptom of a different one.
I was running 3.32. I've updated to 3.33 and do not have any issues with tun0. Or are you referring to later git commits? I'm on a Synology NAS (DSM7) as well, but WireGuard to Mullvad. So far everything is fine. I'll keep an eye on the public port issue as ever, but so far tun0 is present and still bound in qBittorrent as expected.
In the meantime, feel free to use this script I made, it's not perfect but good enough. Keep it running the whole time on the host system.
I tested this script with an echo instead of restart before actually enabling it, and if your gluetun has been running a while and has already restarted a few times, it will restart qBittorrent just as many times in rapid sequence. I think I will try the longer timeout for the gluetun healthcheck first, to avoid the internal reconnects.
Switched over to this recently and started seeing this daily (scheduled VPN reconnect). Glad it's already been reported but hoping for an integrated solution.
AirVPN WireGuard here. The same solutions seem to work (restarting the container), however I would like to avoid having to do that.
Is an official solution possible? @qdm12
Is an official solution possible?
The best workaround for now is to use the libtorrent v1 version of qBittorrent, or switch to Transmission. It's an issue with libtorrent v2.
Is an official solution possible?
If restarting the container is undesirable, you should use https://github.com/qdm12/gluetun/issues/1407#issuecomment-1461582887.
@ksurl Sounds like a downgrade best avoided. Is there a bug reference for the libtorrentv2 issue?
@Eiqnepm Nifty but requires another container, and isn't on the UNRAID app portal. Looking for an official solution within this container. Can you merge the solution with a pull request here?
I found no other functionality changes with v1. Does UNRAID not let you use any image from Docker Hub? You could accomplish the same thing with a cron script to poke the API.
and isn't on the UNRAID app portal
Under Apps and then Settings, enable additional search results from DockerHub.
The container is very lightweight. It could be implemented into Gluetun; I even made an issue upon request (https://github.com/qdm12/gluetun/issues/1441#issue-1612862391), however I don't currently understand the inner workings of Gluetun and don't have the ability to implement the feature myself at this time.
If the maintainer decides this is an issue that Gluetun should resolve firsthand, it should not be a very daunting task, considering I managed to get it done with just over two hundred lines of Go.
If this is a libtorrent issue then a bug should be opened there. I don't think gluetun should add a fix for a third-party issue that already has a simple container workaround.
Under Apps and then Settings, enable additional search results from DockerHub.
Cool that there is that option, however I do not see it.
As it happens... the issue sort of just went away on its own, apparently. There were several days I needed to restart the container, but after a recent Gluetun update the issue seems to have gone away.
Here's how I handle restarting dependent dockers when Gluetun restarts: https://gist.github.com/Snuffy2/1d49250df3a5c8fdb3a24d486df92015
I've gone ahead and made a container, portcheck, purely to monitor the incoming port status and automatically change the port and then back if it's inaccessible.
@eiqnepm I am a bit confused by portcheck. Does portcheck change the qBittorrent port to a random one and then, after some time, change the port back to the original one (the one configured with port forwarding)?
It checks whether the port is currently accessible using the local address of the tunnel. If it is not accessible, it changes the port to zero, basically disabling port forwarding for a brief moment, and then sets the port back to what is set in the QBITTORRENT_PORT environment variable.
Got it. Thanks @eiqnepm
@eiqnepm I am having this issue with qbittorrent, but also with other containers linked to gluetun (radarr, sonarr, bazarr, jackett); all of them become inaccessible after gluetun restarts the connection. Could your portcheck container work with containers other than qbittorrent?
FYI, when the healthcheck fails and the VPN gets restarted, port forwarding should now re-trigger properly (it was bugged) on the latest image and in the future release v3.36.0, with commit 71201411f47f1f558290444c7921d82edce9c728.
Qbittorrent might still fail to see the port changed; see the last few comments in https://github.com/qdm12/gluetun/issues/1797 - essentially it's not aware of the tunnel being changed and its pool of connections no longer works, which is why changing the port through its API fixes it.
To confirm my understanding, does that mean (if using qBittorrent) that we need to continue running the portcheck Docker image to flip ports? Or should this be fixed now? I've read through the 3 different issues mentioned in this thread and it's still unclear to me.
Thanks - I really haven't had any issues with this since I added portcheck to the stack, but it would be nice to get rid of it at some point.
I'm still having issues with qbittorrent + gluetun, and portcheck sorta kinda works around it, but sometimes things still go awry and I haven't had the time to figure out why.
I double checked my containers are up to date, and still saw this issue occur when portcheck was off/stopped. So I don't think the changes mentioned above fix this specific problem. Still need portcheck.
Is there a good solution for deluge?
@qdm12 Anything? Deluge is still not aware when Gluetun reconnects to AirVPN and I lose the forwarded port until I restart Deluge.
Is there a good solution for deluge?
It would most likely be possible for me to add support for Deluge to portcheck. After a quick network inspection, it does seem Deluge does things in a slightly more complicated way.
Unfortunately I haven't used port forwarding since it was removed from Mullvad, so I would be unable to test whether it actually works with Deluge.
I think I could set you up to connect with my AirVPN if it would help with this?
The last week or two my stack hasn't lost connection (at least, I haven't noticed; I was waiting for it to happen again to try to figure out the best way to set up a health check), but it would be good to solve this reliably.
I've created a dev branch to add Deluge support; it is completely untested: portcheck:dev.
I think I could set you up to connect with my AirVPN if it would help with this?
It would be nice to test it myself to fix any issues if you're willing to let me borrow one of your connections.
Ah, I was out and about when I saw your comment and didn't realize it was simply for portcheck. Wouldn't it be better to implement this as a healthcheck in the container? That was my plan for when I lose the port again in my setup.
I figure it would be better to implement this as a healthcheck to the container?
The consensus seems to be that because this is not necessarily an issue with Gluetun, but rather with libtorrent, it should not be directly tackled by Gluetun.
portcheck is written in Go and runs on Alpine, so it has a very low footprint. It is currently the only way I know of to open the ports back up automatically without restarting the container itself.
Could Gluetun just get an option to fully restart whenever the connection goes down? That would resolve the problem in a roundabout way. When Gluetun restarts, docker restarts all containers that use its network.
That would be a good solution for those who don't mind the service containers restarting. I'd imagine Gluetun would need access to /var/run/docker.sock.
I'd imagine Gluetun would need access to /var/run/docker.sock.
Based on what the other person said, it would just need to end its own process, no?
Based on what the other person said, it would just need to end its own process, no?
Gluetun would need to restart the container it is running in to restart the service network; otherwise the service network would remain the same.
I am not sure whether a Gluetun process restart would fix the torrent issue, as it doesn't affect the torrent client containers directly.
When Gluetun restarts, docker restarts all containers that use its network.
When the Gluetun Docker container restarts, all of the Docker containers using it as a service network will restart. However, if Gluetun had a persistent entrypoint process which merely restarted the main Gluetun process, all within the Gluetun Docker container, it would not affect the other Docker containers, as the Gluetun Docker network would remain the same.
Processes inside Docker containers don't have the ability to manipulate the state of the container itself out of the box.
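For context, the usual escape hatch is the one mentioned earlier in this thread: bind-mounting the Docker socket and calling the Docker Engine API. A hedged sketch (the API version prefix is an assumption, and the curl line is left commented since it needs a mounted socket and appropriate permissions):

```shell
# Sketch: restart a container via the Docker Engine API over the UNIX
# socket. Requires /var/run/docker.sock bind-mounted into the container.

# Pure helper: build the restart endpoint for a container name or ID
# (the v1.43 API version prefix is an assumption; adjust to your daemon).
restart_endpoint() {
  printf 'http://localhost/v1.43/containers/%s/restart' "$1"
}

# Uncomment when the socket is available:
# curl -s --unix-socket /var/run/docker.sock -X POST "$(restart_endpoint gluetun)"
```

This is exactly the security trade-off discussed above: whatever holds the socket effectively holds root on the host.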
Is this urgent?
No
Host OS
Ubuntu 22.04
CPU arch
x86_64
VPN service provider
Custom
What are you using to run the container
docker-compose
What is the version of Gluetun
Running version latest built on 2022-12-31T17:50:58.654Z (commit ea40b84)
What's the problem 🤔
Everything works as expected when the qBittorrent and gluetun containers are freshly started: qBittorrent is listening on the open port and is reachable via the internet. However, when gluetun runs for a longer period of time and the VPN stops working briefly for some reason, triggering gluetun's internal VPN restart, the open port in qBittorrent is no longer reachable.
What I found out was that by changing the open listening port in the qBittorrent WebUI settings to some random port, saving the configuration and then immediately reverting the change to the original port, it starts listening and is once again reachable. Just restarting the qBittorrent container without changing anything also worked.
Is there anything gluetun can do to prevent this? Or is this solely qBittorrent's bug? Unfortunately, I have no idea.
Thanks!
Share your logs
Share your configuration
No response