ameinild closed this issue 1 year ago.
+1 for me: 28 torrents, the container has only been up for 2 days and is already using 2.97 GB of my system's 4.00 GB of RAM.
I also have this problem.
Same problem. Is there a way to update the version of transmission used, please?
Use the beta branch with Transmission 4.0.0 beta-2 - this works for me.
And this command to pull:
docker pull haugene/transmission-openvpn:beta
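If you run the container with plain docker run rather than compose, switching tags means re-creating the container from the new image. A minimal sketch (the provider, credentials, paths, and container name below are placeholders, not values from this thread; adjust the OPENVPN_* settings to your own provider per the project docs):

```bash
# Pull the beta image and re-create the container from it.
# The OPENVPN_* values, the data path, and the container name are placeholders.
docker pull haugene/transmission-openvpn:beta
docker stop transmission-vpn && docker rm transmission-vpn
docker run -d --name transmission-vpn \
    --cap-add=NET_ADMIN \
    -e OPENVPN_PROVIDER=PIA \
    -e OPENVPN_USERNAME=user \
    -e OPENVPN_PASSWORD=pass \
    -v /path/to/data:/data \
    -p 9091:9091 \
    haugene/transmission-openvpn:beta
```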
Thanks a lot @ameinild, ~~that sorted it!~~ Edit: I may have spoken too soon. RAM usage is rising again, with only a single torrent. I'll let it grow overnight and report back. Running on a low-RAM NAS, so this could be enough of a problem to require running an alternate client -- but it's rare to find one as nicely packaged for VPN as this (thanks @haugene!)
> Use the beta branch with Transmission 4.0.0 beta-2 - this works for me.
> And this command to pull:
> docker pull haugene/transmission-openvpn:beta
Unfortunately, Transmission 4.0.0 is banned on about a third of the private trackers I use, so this isn't an option -- but thank you for the suggestion.
Yeah, I know it's an issue that the Transmission beta is banned on private trackers. In this case, I would suggest instead reverting to an earlier Docker image based on Ubuntu 20.04 and Transmission 2.9X, like tag 4.2 or 4.1. Or possibly the dev branch - I don't know if that fixes the issue yet. But there should be other possibilities. 😎
There are actually several Docker tags to try out:
@ameinild I'm still getting the memory issue :( I'm now at 5.43 GB after running for 2 days with a single torrent :/ Weirdly enough, the host system (a Synology DSM 6) only reports 809 MB of total memory used, so I'm really puzzled. Any idea what's going on, please?
I have no idea - the beta version works perfectly for me on Ubuntu. You could try rolling back to an earlier release. Otherwise, wait for a stable version of Transmission where they fix the memory leaks that are clearly present. 😬
Weirdly, this morning RAM is back to 51MB. Go figure... 🤔
I'm stopping in to say I'm seeing memory leaks even on the new beta. I have 128 GB of RAM, so I didn't really notice this until recently, when it also filled up all my swap.
Here is the memory after running for a few hours with < 40 torrents.
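For anyone who wants numbers over time rather than a one-off screenshot, a simple loop around docker stats can log the container's memory usage (a sketch; the container name transmission-openvpn is a placeholder for whatever yours is called):

```bash
# Append a timestamped memory-usage sample for the container every 5 minutes.
# "transmission-openvpn" is a placeholder container name.
while true; do
    echo "$(date -Is) $(docker stats --no-stream --format '{{.MemUsage}}' transmission-openvpn)" \
        >> transmission-mem.log
    sleep 300
done
```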
Thanks for sharing! I wonder whether we should move this convo to an upstream issue (i.e. on the Transmission repo)?
It's strange. It seems the memory leak issue hits randomly for different versions of Transmission and on different OSes. On Ubuntu 22.04 I had no issue with Trans 2.94, a huge memory leak on Trans 3.00, and no problem again on Trans 4.0-beta. This would make it very difficult to troubleshoot, I guess.. 😬
I am also using Ubuntu 22.04.
After it quickly jumped back up to over 20GB, I instituted a memory limit through Portainer and that has helped. Now it doesn't go above whatever limit I set. I am not sure if it will affect the functionality of the container though. Guess we'll see.
I also switched back from beta to latest since that didn't fix it anyway and I would rather run a stable version.
I'm running Linux Mint 20.3 with v4.3.2 of the container. I haven't tried alternate versions of Transmission, but I became aware of this issue when I noticed high swap usage on the host. After running for about a week with 17 torrents, the container was using 31.5GB of memory and another 6GB of swap. I've been using a limit through Portainer for the last several days without any major issues. I have seen its status listed as 'unhealthy' a couple times, but it resumed running normally after restarting via Portainer.
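For reference, the same kind of cap can be applied without Portainer, straight from the Docker CLI (a sketch; the container name and the 1g value are just examples):

```bash
# Cap an already-running container at 1 GB of RAM
# (roughly what Portainer's memory limit does under the hood).
# Adjust the container name and the limit to your setup.
docker update --memory=1g --memory-swap=1g transmission-openvpn
```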
Same issue here. I'm not sure what changed; it started doing this recently. The image I'm using was pulled 2 months ago. Either I didn't notice it until now, or something changed...
Same here. Capped the container @ 12GB (64GB system) and it ramps up to 12GB super quickly. I restart the container nightly as well.
I was hoping that Transmission 4.0.0 would be our way out of this; troubled to hear that some are still experiencing issues with it :disappointed:
The release is now out :tada: https://github.com/transmission/transmission/releases/tag/4.0.0 :tada: and already starting to get whitelisted on private trackers. But if there's still memory issues then we might have to consider doing something else.
If this could get fixed upstream, or we could narrow it down to a server OS and then report it, that would be the best long-term solution, I guess. If not, the only thing that comes to mind is changing the distro of the base image to see if that has an effect. Before we start automatically restarting Transmission within the image or other hackery :grimacing:
The beta tag of the image was updated with the 4.0.0 release version, so :crossed_fingers:
A couple weeks ago I noticed the web interface getting unresponsive when the container was listed as unhealthy and set up a cron job to restart the container every 4 hours. Initially I tried longer intervals, but the container would go unhealthy as soon as the 1GB memory limit was reached and essentially stop working. With a 4 hour restart window I'm able to catch the container before it goes unresponsive, and it's been working great. If it would be helpful, I can adjust the restart interval and post container logs.
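In case it helps anyone else trying the same workaround, a restart schedule like that only needs one crontab entry on the host (a sketch; the container name and log path are placeholders, and the 4-hour interval is just what worked above):

```bash
# Add via `crontab -e` on the host: restart the container every 4 hours, on the hour.
0 */4 * * * /usr/bin/docker restart transmission-openvpn >> /var/log/transmission-restart.log 2>&1
```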
The latest version with the Transmission 4.0.0 release still works well for me on Ubuntu 22.04 server. 👍
Saw this thread on transmission itself about high memory usage, even with 4.0; may be pertinent: https://github.com/transmission/transmission/issues/4786
Very curious to follow that thread @theythem9973. Hopefully they'll find something that can help us here as well :crossed_fingers:
But this issue was reported here when upgrading to 4.3 of this image, which uses a much older build of Transmission, and we also previously ran v3.00 of Transmission under Alpine without issues (tag 3.7.1). So I'm starting to doubt the choice of Ubuntu 22.04 as a stable option :disappointed: We moved away from Alpine for a reason as well, so I'm not sure if we want to go back there or switch to Debian if Ubuntu doesn't pan out.
Before we change the base image again, I'm curious whether there's a possibility to solve this by rolling forward instead. I've created a new branch for the 22.10 (kinetic) release. Anyone up for pulling the kinetic tag of the image and seeing if that works any better?
In addition to the kinetic tag, I have now also tried rolling back to focal as the base image and installing Transmission via the PPA so that we still stay on Transmission 3.00. So you can also try using the focal tag and see if that's better.
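For anyone testing, pulling the two experimental tags should just be the following (assuming they are published on Docker Hub under the same image name as the other tags):

```bash
# Ubuntu 22.10 (kinetic) based build
docker pull haugene/transmission-openvpn:kinetic

# Ubuntu 20.04 (focal) based build, with Transmission 3.00 installed from the PPA
docker pull haugene/transmission-openvpn:focal
```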
Found this was clobbering me as well. I pulled kinetic; I'll let you know how it goes. Pre-kinetic, this was all Transmission:
Hmm, kinetic isn't working for me:
Checking port...
transmission-vpn | Error: portTested: http error 400: Bad Request
transmission-vpn | #######################
transmission-vpn | SUCCESS
transmission-vpn | #######################
transmission-vpn | Port:
transmission-vpn | Expiration Fri Mar 3 00:00:00 EST 2023
transmission-vpn | #######################
transmission-vpn | Entering infinite while loop
transmission-vpn | Every 15 minutes, check port status
transmission-vpn | 60 day port reservation reached
transmission-vpn | Getting a new one
transmission-vpn | curl: (3) URL using bad/illegal format or missing URL
transmission-vpn | Fri Mar 3 23:28:02 EST 2023: getSignature error
transmission-vpn |
transmission-vpn | the has been a fatal_error
transmission-vpn | curl: (3) URL using bad/illegal format or missing URL
transmission-vpn | Fri Mar 3 23:28:02 EST 2023: bindPort error
transmission-vpn |
transmission-vpn | the has been a fatal_error
Trying focal. Update: focal has been running download & seed for 12+ hours with zero memory increase.
I see that Transmission 4.0.1 has been released with a possible fix. I'll make a new build with it on the beta branch.
The latest beta with Transmission 4.0.1 is (still) working fine for me on Ubuntu 22.04 and Docker 23.0.1. 👍
I've been running the latest Transmission 4.0.1 since last week on two containers with 5/10 GB limits. The larger container is fairly constant with 50-ish seeding torrents, always around 7-8 GB used over the last 4 months of stats I have (since older versions as well). The second one is my main download container, carrying between 0 and 10-ish torrents, and it seldom goes above 2 GB. Running on an old Mac mini, macOS Monterey, and the latest Docker for Mac.
Same problem here with the 4.0.1 version.
After restarting the container, everything is back to normal. I have modified my docker-compose file to set a RAM limit, just in case.
Same for me on Synology DSM 7.1.1 and latest Docker image. Always maxing out the available memory. I have to limit the usage in Docker.
Been running focal for almost 3 weeks and it's looking good. Below you can see my memory settle down halfway through week 9.
However, bizarrely, the container thinks it's chewing up 10 GB of memory when the total system is barely using 2 GB. Maybe it's all cache and I've just never looked too closely before.
Anyway, Focal looks good for me on Ubuntu 22.04 and Docker 23.0.1.
Switched to the focal branch; no more issues with Ubuntu 20.04.6 LTS on ARMv7 with 4 GB of RAM and Docker 23.0.2.
Focal has been running great for the last few weeks for me.
Tried updating the focal branch with the latest updates, but I cannot get the GitHub Action to build... it builds fine locally. Any suggestions?
CMake errors from the build log:
list sub-command REMOVE_ITEM requires two or more arguments
Could NOT find DEFLATE: Found unsuitable version "1.5", but required is at least "1.7" (found /usr/include)
Could NOT find CURL (missing: CURL_LIBRARY) (found suitable version "7.68.0", minimum required is "7.28.0")
Outdated build dependencies? https://github.com/transmission/transmission/issues/4142 https://github.com/microsoft/vcpkg/issues/18412
Hi everyone, I also had this problem. Pulled the focal branch and memory usage overnight appears to be stable at around 250MB with 50ish seeding torrents.
Is anyone actually seeing a leak on 5.0.2? That uses 22.04 as the base but runs Transmission 4.0.3.
The extent of my knowledge is... I pulled the latest branch a week or so ago and the memory usage crashed the container within an hour. I only just got around to looking into it, so I pulled the focal branch (no change in seed number etc.) and it's been fine overnight.
I'll pull the latest branch again and see what happens.
I don’t recall what version :latest is but I’ve been running :5.0.2 and it works great. When you pulled latest, what version of transmission was it using?
Unfortunately, I don't have that info - not very useful I know! I will report back later once it's been running for a few hours.
> I don't recall what version :latest is but I've been running :5.0.2 and it works great. When you pulled latest, what version of transmission was it using?
:latest and :5.0.2 are the same.
I'm now running :latest (:5.0.2), and still no issues for me. Was previously running :beta (with Transmission 4.0.2), which also worked fine.
After a day, the latest branch is running at around 600 MB, so not crazy.
For comparison, I have a version 3.3 container at 350 MB with the same seeds etc. I assume the difference is due to 3.3 running Alpine.
I've been using focal for a couple of weeks now. Previously I was using "latest" (I don't know the exact version). focal has been great - it's sitting pretty at ~700 MB, whereas previously it'd grow to upwards of 18 GB until it hit swap / crashed.
Glad to hear it @theythem9973 :ok_hand: We're on to something :smile: Are you also up for testing with the newest 5.x release? The 5.0.2 tag?
The latest version drastically reduces memory usage. I'm running at 88 MB after 2 days... Case closed, imo.
FWIW, I am running latest and still have this issue. Hitting my 4 GB limit I have set through Portainer.
latest is a bit of a floating reference. Do you have logs showing the revision? Or have you double-checked that you've pulled lately? Can you change to tag 5.0.2 just to be sure?
I am using the tag haugene/transmission-openvpn:latest. I just tried to pull again and nothing changed. Portainer is reporting that the image is up to date. I poked around the logs but didn't see anything that jumped out at me and said I was using latest. But I am fairly confident I am using https://hub.docker.com/layers/haugene/transmission-openvpn/latest/images/sha256-df0b4b4c640004ff48103d8405d0e26b42b0d3631e35399d9f9ebdde03b6837e, given that Portainer says what the container is using is the most up to date.
I swapped to 5.0.2, and now Portainer shows the same image as being tagged for both 5.0.2 and latest, so it's the same image whether I change to 5.0.2 or use latest. I will leave it as 5.0.2 and monitor, but I suspect it will exhibit the same behavior since the actual image being used didn't change. Right now it's at ~600 MB and every few seconds it is going up by ~30 MB.
EDIT: I looked at the logs and see Starting container with revision: 1103172c3288b7de681e2fb7f1378314f17f66cf.
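For anyone else trying to confirm which image and revision they are actually running, a few commands outside Portainer can help (a sketch; the container name is a placeholder):

```bash
# Show the tag the container was created from and the image ID it resolves to
docker inspect --format '{{.Config.Image}} {{.Image}}' transmission-openvpn

# Compare against the digests the local tags point to
docker images --digests haugene/transmission-openvpn

# The revision hash is also printed at startup, as noted above
docker logs transmission-openvpn 2>&1 | grep 'Starting container with revision'
```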
Sounds like you had the correct image all along, then. So it will probably use a lot of memory now as well. What OS and version are you running, and which Docker version?
OS is Ubuntu 22.04 LTS. Docker 23.0.3. And after restarting the container a couple hours ago it's back up to my Portainer memory limit (4GB).
Is there a pinned issue for this?
Is there an existing or similar issue/discussion for this?
Is there any comment in the documentation for this?
Is this related to a provider?
Are you using the latest release?
Have you tried using the dev branch latest?
Docker run config used
Current Behavior
The Transmission Container uses over 10 GB of memory after running for 10 days with around 25 torrents.
Expected Behavior
I expect the container to not use over 10 GB of memory when only seeding a couple of torrents at a time.
How have you tried to solve the problem?
Works without issue on version 4.2 (Ubuntu 20.04 and Transmission 2.X).
Log output
HW/SW Environment
Anything else?
There appears to be a fix in a later version of Transmission 3, mentioned on the Transmission GitHub:
https://github.com/transmission/transmission/issues/3077
It appears this Nightly build fixes the issue.