haugene / docker-transmission-openvpn

Docker container running Transmission torrent client with WebUI over an OpenVPN tunnel
GNU General Public License v3.0

Web proxy stops working after period of time #2489

Open justified999 opened 1 year ago

justified999 commented 1 year ago

Is there a pinned issue for this?

Is there an existing or similar issue/discussion for this?

Is there any comment in the documentation for this?

Is this related to a provider?

Are you using the latest release?

Have you tried using the dev branch latest?

Docker run config used

docker run \
  --name transmission \
  --privileged \
  --label=com.centurylinklabs.watchtower.monitor-only=true \
  --restart unless-stopped \
  -v /volume1/Media/:/Media \
  -e TRANSMISSION_DOWNLOAD_DIR=/Media/Downloads/Torrents/Movies \
  -e TRANSMISSION_INCOMPLETE_DIR=/Media/Downloads/Torrents/Incomplete \
  -e TRANSMISSION_WATCH_DIR=/Media/Downloads/Torrents/ToFetch \
  -v /volume1/docker/Transmission/:/config \
  -e OPENVPN_PROVIDER=EXPRESSVPN \
  -e OPENVPN_CONFIG=my_expressvpnuk-_docklands_udp \
  -e OPENVPN_USERNAME= \
  -e OPENVPN_PASSWORD= \
  -e PUID= \
  -e PGID= \
  -e LOCAL_NETWORK=192.168.0.0/24 \
  -e TZ=* \
  -e TRANSMISSION_WEB_UI=flood-for-transmission \
  -e WEBPROXY_ENABLED=true \
  --log-driver json-file \
  --log-opt max-size=10m \
  -p 27800:9091 \
  -p 8888:8118 \
  haugene/transmission-openvpn:4.2

Current Behavior

The web proxy works for routing traffic for a varying period of time after creating and starting the container. After a period of time (usually a few days) the web proxy no longer works despite the VPN still being active and downloads possible. Restarting the container does not fix the issue. Deleting and creating the container again with the same docker-run works for a few days.

Expected Behavior

The web proxy should continue working as long as the VPN is active.

How have you tried to solve the problem?

Tried v4.3. Tried various configs from the same VPN provider.
Tried a fresh install with new config files.

Log output

settings.json.txt

HW/SW Environment

- OS: Synology DSM 7.1.1-42962 Update 3
- Docker: 20.10.3-1308

Anything else?

No response

edgd1er commented 1 year ago

I had a similar problem, though in my case a container restart would fix it. The cause of the proxy no longer responding is that, after a period of time, Docker gives a new IP address to eth0, while the Privoxy config is bound to the IP address known at boot time. Here is the code I added to healthcheck.sh to fix the problem. You may try to copy/paste it into the script.

if [[ ${WEBPROXY_ENABLED} =~ [Yy]|True|true ]]; then
  # address privoxy was configured with at boot
  proxy_ip=$(grep -oP "(?<=^listen-address).*$" /etc/privoxy/config | sed 's/ //g')
  # address currently assigned to eth0
  cont_ip=$(ip -j a show dev eth0 | jq -r .[].addr_info[].local)
  if [[ ${proxy_ip} != "${cont_ip}" ]]; then
    echo "Privoxy error: container ip (${cont_ip}) has changed: privoxy listening to ${proxy_ip}, restarting privoxy."
    pkill privoxy
    /opt/privoxy/start.sh
  fi
fi
Japhys commented 1 year ago

Having the same problem!

justified999 commented 1 year ago


Thanks. I think I managed to build the image locally: I downloaded the repo, edited the autoheal.sh, and ran docker build. Is there any way to check whether the modified script has been incorporated into the image? How can I check which IP address Privoxy is listening on or assigned?

Anyway, I'll run the modified container alongside the standard container for a while and report what happens.

edgd1er commented 1 year ago

http://p.p will give you the listening port and address if privoxy is listening on the proper eth0 IP. Run ip -j a show dev eth0 | jq -r .[].addr_info[].local in your container, then compare with privoxy's listening address.

PR #2494 was proposed as a fix for this issue.

barclayiversen commented 1 year ago

I believe I'm having the same issue. Just to be clear, I'm not able to access the web UI after the container runs for about 2 days. There are no error messages in the logs. Restarting the container resolves the issue for 2 more days. The error I get in my browser is "connection reset". This problem started happening after I updated the container image I was using a few months ago.

edgd1er commented 1 year ago

@barclayiversen ,

As I understand it, Docker sometimes changes the container's IP. This was not a problem when Transmission was listening on all interfaces (ethX and tun): see the default settings. https://github.com/haugene/docker-transmission-openvpn/blob/fc003b078c3bcc7b17eca5e51e34bcbf694ef549/transmission/default-settings.json#L9

That could be seen as a problem, as it exposes your Transmission instance to all of your VPN provider's customers. Binding the listening IP to ethX's IP is the way to be sure you are the only one who can access your instance. Hence the connection issue when Docker changes the container's IP.

The listening IP is set when OpenVPN launches Transmission: https://github.com/haugene/docker-transmission-openvpn/blob/fc003b078c3bcc7b17eca5e51e34bcbf694ef549/transmission/start.sh#L28-L33

At the moment, since Transmission is spawned by OpenVPN, I'm not sure Transmission can be restarted to listen on the new IP. Restarting the container is the workaround.

PR #2494 was a commit to restart privoxy when the IP changed.

Next time "connection reset" is displayed, you may check Transmission's listening address (set in settings.json) and eth0's IP (ip -4 a).

edgd1er commented 1 year ago

An easier, and maybe better or simpler, option would be to add a health check that compares eth0's actual IP with Transmission's IP; whenever there is a difference, the health check fails and/or a signal is sent to OpenVPN to kill the connection.
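The comparison at the heart of such a health check can be sketched as a small shell function. This is a hypothetical helper, not the repo's actual script; the function name and the commented commands for obtaining the two addresses are assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: decide whether the container is healthy by comparing
# the address Transmission is bound to with eth0's current address.
# Returns 0 (healthy) when they match, or when bound to all interfaces.
bind_ip_healthy() {
  local trans_ip="$1" cont_ip="$2"
  [[ "$trans_ip" == "0.0.0.0" || "$trans_ip" == "$cont_ip" ]]
}

# Inside the container the two values could be obtained roughly like this
# (commands taken from the comments above; the settings path is an assumption):
#   cont_ip=$(ip -j a show dev eth0 | jq -r '.[].addr_info[].local')
#   trans_ip=$(jq -r '."bind-address-ipv4"' /config/transmission-home/settings.json)
#   bind_ip_healthy "$trans_ip" "$cont_ip" || exit 1   # non-zero marks the container unhealthy
```

A Docker HEALTHCHECK treats any non-zero exit status as unhealthy, so the final line is all that is needed to surface the mismatch.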

An alternative would be to kill and restart Transmission: pkill transmission-daemon && exec su --preserve-environment ${RUN_AS} -s /bin/bash -c "${transbin}/transmission-daemon -g ${TRANSMISSION_HOME} --logfile $LOGFILE" & — but the forwarded ports would need updating (remove the previous one, add the new one).

barclayiversen commented 1 year ago

@edgd1er Thank you for your reply. Please excuse my ignorance but I want to ask you a question regarding your response.


My understanding from your reply is that if I have "bind-address-ipv4" set to 0.0.0.0, then I shouldn't be experiencing the issue? Because in my case, bind-address-ipv4 is set to 0.0.0.0, and in my default-settings.json I see the same.

In environment-variables.sh, however, I see that TRANSMISSION_BIND_ADDRESS_IPV4 is set to a specific, singular IP.

edgd1er commented 1 year ago


If environment-variables.sh has the variable TRANSMISSION_BIND_ADDRESS_IPV4 set, you should be good. If you wish to check, the file to look at is /config/transmission-home/settings.json; that's the file read by the Transmission daemon. If bind-address-ipv4 is set to 0.0.0.0, then Transmission will be bound to both eth0 and tun0. I've not tried it, as that would require another VPN customer (same provider) allowing me to try to access his Transmission instance. I'm not an expert at routing, but usually when 0.0.0.0 is set for a service, that service is available on all interfaces, including loopback.

barclayiversen commented 1 year ago

I see. In my settings.json, bind-address-ipv4 isn't 0.0.0.0; it's a different address, which I guess must be the issue. I wonder where it's getting overwritten...

edgd1er commented 1 year ago


I'm not sure I understand what you are trying to achieve. bind-address-ipv4 being set to eth0's address is what is expected, and is not an issue. Setting it to 0.0.0.0 would be an issue, at least for me. When the code was changed to bind Transmission to a specific IP, there were reports of torrents being added to instances without the instance owner's knowledge.

At each start, settings.json is created from default-settings.json and the environment variables: https://github.com/haugene/docker-transmission-openvpn/blob/fc003b078c3bcc7b17eca5e51e34bcbf694ef549/transmission/start.sh#L64 except BIND_ADDRESS, which is set here with the current eth0 address: https://github.com/haugene/docker-transmission-openvpn/blob/fc003b078c3bcc7b17eca5e51e34bcbf694ef549/transmission/start.sh#L32 You could modify that script to do whatever you wish.

But we are drifting from the issue's subject: the proxy.

barclayiversen commented 1 year ago

Understood, thank you. I might try the dev build, since it looks like PR 2494 was merged two weeks ago and appears to address the issue.

n-shay commented 1 year ago

I was having similar issues with the proxy not working, and restarting the container did not solve them. It looked to me like the wrong IP was overridden into that env var, but I can't be sure. I can confirm that the dev image fixed the issue.

barclayiversen commented 1 year ago

I switched over to the dev image and I'm still having an issue. However it appears to be something different now.

(screenshot attached)

kstunger commented 1 year ago

This worked for me:

  1. Copied start.sh from this repo to ${USERDIR}/docker/privoxy/start.sh on the host
  2. Added a volume mount in docker-compose: ${USERDIR}/docker/privoxy/start.sh:/opt/privoxy/start.sh
  3. Set permissions of ${USERDIR}/docker/privoxy/start.sh to 755 on the host (chmod 755 start.sh)
  4. Edited start.sh to set adr="0.0.0.0"

I ran tests using curl -x. I was not able to use my VPN's public IP address to access the proxy; all internal IPs worked as needed.

barclayiversen commented 1 year ago

I just went the lazy route and used a scheduled task to restart the container every 24 hours.
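For reference, the scheduled restart can be a single crontab line on the Docker host. This is only an illustration: the container name comes from the docker run command at the top of the issue, and the 04:00 schedule is an arbitrary choice:

```shell
# Host crontab entry: restart the transmission container every night at 04:00.
0 4 * * * /usr/bin/docker restart transmission
```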

Georgewood1 commented 1 year ago

I'm curious: I am having a similar problem with port forwarding. When I start the container, the port is open, but after an indeterminate amount of time the port closes. The first time I ran the container it took several hours; now it takes just minutes.

justified999 commented 1 year ago

I just went the lazy route and used a scheduled task to restart the container every 24 hours.

I couldn't even get that to work.

I'm curious: I am having a similar problem with port forwarding. When I start the container, the port is open, but after an indeterminate amount of time the port closes. The first time I ran the container it took several hours; now it takes just minutes.

I know it doesn't answer this issue but I eventually gave up and followed this tutorial for an alternate route https://drfrankenstein.co.uk/2022/09/26/qbittorrent-with-gluetun-vpn-in-docker-on-a-synology-nas/

barclayiversen commented 1 year ago

I just went the lazy route and used a scheduled task to restart the container every 24 hours.

I couldn't even get that to work.

I used Python to make a request to localhost on the port that Transmission is exposed on. If anything besides a 200 comes back, I use subprocess to restart the container.
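For anyone who prefers shell over Python, the same watchdog idea can be sketched like this. It is a hypothetical sketch: the function name is made up, the port 27800 comes from the docker run command at the top of the issue, and accepting 401 is an assumption for setups with RPC authentication enabled:

```shell
#!/usr/bin/env bash
# Hypothetical watchdog sketch: decide whether to restart the container
# based on the HTTP status the WebUI returned. 401 is accepted because
# RPC authentication may be enabled.
should_restart() {
  local status="$1"
  [[ "$status" != "200" && "$status" != "401" ]]
}

# Usage on the host (environment-dependent, hence commented out):
#   status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 http://localhost:27800/ || echo 000)
#   should_restart "$status" && docker restart transmission
```

Run from cron, this restarts only when the probe actually fails, instead of unconditionally every 24 hours.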

begunfx commented 1 year ago

I'm having the same issue. Transmission runs for 2-3 days, then will not connect to trackers. I tried stopping/starting the container, but that doesn't work. I also tried adding -e TRANSMISSION_BIND_ADDRESS_IPV4=0.0.0.0 as someone suggested in this thread, but no luck. I also tried manually changing the bind-address-ipv4 option in the settings.json file, but if I restart the container it reverts back to a single IP address and doesn't seem to work either. I also tried changing the values without restarting the container.

I also tried the latest and dev versions with no improvement. If I re-install transmission it does work again (for another 2-3 days) - same as others have reported here.

Any help would be greatly appreciated! See below for my setup.

I do see an error in transmission: "Scrape error: Tracker gave HTTP response code 307 (Temporary Redirect)". Not sure if that's relevant.

Also, I noticed in the settings.json file there are two bind address entries:

"bind-address-ipv4": "xxx.xxx.xxx.xxx", "rpc-bind-address": "0.0.0.0",

Notice that rpc-bind-address has the 0.0.0.0 value, but bind-address-ipv4 still has a single IP address assigned (I'm assuming that's the VPN provider's external IP).

- System: Synology DSM version 7.1.1-42962
- VPN Provider: Private Internet Access (PIA)
- Transmission is set up behind a reverse proxy (FYI)

Docker run command:

docker run --cap-add=NET_ADMIN -d \
  --name=transmission \
  -e TRANSMISSION_BIND_ADDRESS_IPV4=0.0.0.0 \
  -e TRANSMISSION_RPC_USERNAME= \
  -e TRANSMISSION_RPC_PASSWORD= \
  -e TRANSMISSION_RPC_AUTHENTICATION_REQUIRED=true \
  -v /volume1/docker/transmission/data:/data \
  -v /volume1/docker/transmission/config:/config \
  -v /volume1/docker/transmission/config/openvpn-credentials.txt:/config/openvpn-credentials.txt \
  -e OPENVPN_PROVIDER=PIA \
  -e OPENVPN_CONFIG=us_california,us_las_vegas,us_seattle,us_west \
  -e OPENVPN_USERNAME=None \
  -e OPENVPN_PASSWORD=None \
  -e LOCAL_NETWORK=192.168.1.0/24 \
  -e TZ=America/Los_Angeles \
  --log-driver json-file \
  --log-opt max-size=10m \
  -e OPENVPN_OPTS='--inactive 3600 --ping 10 --ping-exit 60' \
  --restart always \
  -p 9091:9091 \
  haugene/transmission-openvpn:dev

JeeDeWee commented 1 year ago

I had a similar problem, though in my case a container restart would fix it. The cause of the proxy no longer responding is that, after a period of time, Docker gives a new IP address to eth0, while the Privoxy config is bound to the IP address known at boot time. Here is the code I added to healthcheck.sh to fix the problem. You may try to copy/paste it into the script.

if [[ ${WEBPROXY_ENABLED} =~ [Yy]|True|true ]]; then
  proxy_ip=$(grep -oP "(?<=^listen-address).*$" /etc/privoxy/config | sed 's/ //g')
  cont_ip=$(ip -j a show dev eth0 | jq -r .[].addr_info[].local)
  if [[ ${proxy_ip} != "${cont_ip}" ]]; then
    echo "Privoxy error: container ip (${cont_ip}) has changed: privoxy listening to ${proxy_ip}, restarting privoxy."
    pkill privoxy
    /opt/privoxy/start.sh
  fi
fi

I think there is a mistake in the script when comparing with the address in /etc/privoxy/config. That file also includes the port, which causes privoxy to restart every minute.

You now see this error message in docker inspect:

Privoxy error: container ip (172.20.0.4 has changed: privoxy listening to 172.20.0.4:8118, restarting privoxy.

I am not a grep expert, but I believe this part of the script needs to be adjusted to extract only the IP, not the port number.

edgd1er commented 1 year ago

Here is a regex to filter out the port; I have no computer to test it at the moment, though.

(?<=^listen-address)[^:]+
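A quick way to sanity-check that regex against a sample config line (the address here is made up; requires GNU grep with -P support for the lookbehind):

```shell
# Extract only the IP, dropping the port, from a privoxy listen-address line.
line='listen-address  172.20.0.4:8118'
ip=$(printf '%s\n' "$line" | grep -oP '(?<=^listen-address)[^:]+' | tr -d ' ')
echo "$ip"   # -> 172.20.0.4
```

The lookbehind anchors the match just after "listen-address", and [^:]+ stops at the colon, so the :8118 port suffix never reaches the comparison.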

begunfx commented 1 year ago

I had a similar problem, though in my case a container restart would fix it. The cause of the proxy no longer responding is that, after a period of time, Docker gives a new IP address to eth0, while the Privoxy config is bound to the IP address known at boot time. Here is the code I added to healthcheck.sh to fix the problem. You may try to copy/paste it into the script.

if [[ ${WEBPROXY_ENABLED} =~ [Yy]|True|true ]]; then
  proxy_ip=$(grep -oP "(?<=^listen-address).*$" /etc/privoxy/config | sed 's/ //g')
  cont_ip=$(ip -j a show dev eth0 | jq -r .[].addr_info[].local)
  if [[ ${proxy_ip} != "${cont_ip}" ]]; then
    echo "Privoxy error: container ip (${cont_ip}) has changed: privoxy listening to ${proxy_ip}, restarting privoxy."
    pkill privoxy
    /opt/privoxy/start.sh
  fi
fi

Where would you put this in the script? Is there a specific line that this should be added to?

Thanks!

edgd1er commented 1 year ago

Already added in dev branch: https://github.com/haugene/docker-transmission-openvpn/blob/2bd89d7ae0b0fed91445b09114377b0bdf694ee9/scripts/healthcheck.sh#L58-L69

pkishino commented 1 year ago

Fixed in dev branch

JeeDeWee commented 1 year ago

Already added in dev branch:

I think the actual fix to strip the port number, from https://github.com/haugene/docker-transmission-openvpn/issues/2489#issuecomment-1536818317, is not in this code snippet. I also do not see the port-stripping code included in the dev branch.

pkishino commented 1 year ago

Right, my bad... once a PR with the port-strip regexp is added, we can close this.

JeeDeWee commented 1 year ago

Right, my bad... once a PR with the port-strip regexp is added, we can close this.

I am not at home now, but at the end of next week I will create a PR for this.

JeeDeWee commented 1 year ago

I tried running with the modification, but unfortunately it does not help. The problem surfaces when there is a Docker update from my Debian repository: when I update Docker, the web proxy goes offline.

What I noticed is that the proxy does not restart and keeps reporting that the IP addresses are not identical.

Something else must be going on here.

2nistechworld commented 1 year ago

Hello, I fixed the issue on my end. I based the fix on the script provided by @edgd1er, but I also replace the container's IP directly in the /etc/privoxy/config configuration file before the restart.

proxy_ip=$(grep -i listen-address /etc/privoxy/config | grep -v "#" | cut -d " " -f2 | cut -d ":" -f1)
cont_ip=$(ip -j a show dev eth0 | jq -r .[].addr_info[].local)
if [[ ${proxy_ip} != "${cont_ip}" ]]; then
  echo "Privoxy error: container ip (${cont_ip}) has changed: privoxy listening to ${proxy_ip}, restarting privoxy."
  pkill privoxy || true
  # rewrite the stale address in the config before restarting
  sed -i "s/$proxy_ip/$cont_ip/g" /etc/privoxy/config
  /opt/privoxy/start.sh
fi
begunfx commented 1 year ago

Sweet. Thank you!

ragebflame commented 1 year ago

Thanks for this. Has resolved the issue for me.

edgd1er commented 1 year ago

Hello, I fixed the issue on my end. I based the fix on the script provided by @edgd1er, but I also replace the container's IP directly in the /etc/privoxy/config configuration file before the restart.

proxy_ip=$(grep -i listen-address /etc/privoxy/config | grep -v "#" | cut -d " " -f2 | cut -d ":" -f1)
cont_ip=$(ip -j a show dev eth0 | jq -r .[].addr_info[].local)
if [[ ${proxy_ip} != "${cont_ip}" ]]; then
  echo "Privoxy error: container ip (${cont_ip}) has changed: privoxy listening to ${proxy_ip}, restarting privoxy."
  pkill privoxy || true
  sed -i "s/$proxy_ip/$cont_ip/g" /etc/privoxy/config
  /opt/privoxy/start.sh
fi

The sed should not be needed, as set_port in privoxy's start script already updates the IP and port. A /g may be added to the sed command if you have several listen-address lines, though that is unlikely, as the container is supposed to have only one eth interface. https://github.com/haugene/docker-transmission-openvpn/blob/42eb2ee94ef9a3ce45bdccb308f9387b36c4f6e0/privoxy/scripts/start.sh#L27-L34

mathiasflorin commented 11 months ago

I created a pull request to change the regular expression in the privoxy/scripts/start.sh script: https://github.com/haugene/docker-transmission-openvpn/pull/2673

JeeDeWee commented 11 months ago

I created a pull request (#2678) with a mix of various proposed changes.

stale[bot] commented 8 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.