Closed kthrow closed 10 months ago
Hello, your screenshot of top shows less than 7GB of memory actually in use; the other 32GB is in buffers and caches, which the kernel keeps around to reduce disk I/O.
rTorrent does not physically allocate that memory itself. It lets the Linux kernel decide which files to keep in buffers and caches. The process is currently using approximately 1.5G of resident memory for its internal procedures.
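As a quick way to see this distinction in a shell, the used, buff/cache, and available columns of free tell the story. The numbers below are hypothetical sample values loosely modeled on a 64GB box, not taken from the actual report:

```shell
# Hypothetical `free -m` output line for a 64GB machine (sample values).
# Columns after "Mem:": total used free shared buff/cache available.
sample="Mem: 64265 6800 25465 300 31700 56000"
set -- $sample
# The kernel counts page cache under buff/cache, not under "used",
# and "available" includes memory it can reclaim from cache on demand.
echo "used=${3}MB buff/cache=${6}MB available=${7}MB"
```

On a real system you would just run free -m (or free -h) and read the same columns.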
I would recommend specifying pieces.memory.max.set = 4GB explicitly in your .rtorrent.rc file. This may further optimize your memory consumption, and as a diagnostic step it also eliminates one possible avenue for a software memory leak.
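For reference, a minimal sketch of what that looks like in the config file (the 4GB value is the suggestion above; adjust it to your machine):

```
# ~/.rtorrent.rc -- cap the memory rTorrent maps for piece data.
pieces.memory.max.set = 4GB
```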
Thanks for the info!
I did already have pieces.memory.max.set = 6GB set while all this was going on. After a bunch of searching, though, I also arrived at the possibility that a kernel update was the issue. I was running 6.5.0-14 during the period of high allocations and just downgraded back to 6.2.0-39. So far that seems to have resolved the problem. I'm not sure how I'd determine what changed between these two versions, so I guess I'll just hold back updates and try upgrading again later.
Edit: Could this be related? Searching "kernel 6.5.0 memory leak" turns up this thread: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2041668. It seems like it's bad metrics rather than an actual leak.
Yes, a Linux kernel can cause memory stability issues when running rTorrent. I'm not familiar with how the kernel layer works.
However, I understand the versioning scheme. You selected a version that was too "new". In these Ubuntu version strings, the number after the dash is the package/ABI revision: 6.2.0-39 has been through 39 revisions of fixes, while 6.5.0-14 has only had 14, so the 6.5 series is less mature.
I can see your host is Ubuntu 22.04 LTS. If the 6.2 HWE (Hardware Enablement) kernel is not stable, it is sometimes necessary to go all the way back to the 5.19 GA kernel. Performance is better with newer releases, but memory stability is more important.
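To illustrate the version-string point, here is a small POSIX shell sketch that splits an Ubuntu kernel version into its upstream release and its package revision (the example string is one of the versions from the thread):

```shell
# Ubuntu kernel versions look like <upstream release>-<package/ABI revision>.
v="6.2.0-39"
upstream="${v%-*}"    # strip the dash and everything after it: 6.2.0
revision="${v#*-}"    # strip everything up to the dash: 39
echo "upstream=$upstream revision=$revision"
# Prints: upstream=6.2.0 revision=39
```

The running kernel's own string comes from uname -r and follows the same pattern on Ubuntu.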
Support guidelines
I've found a bug and checked that ...
Description
rTorrent appears to be constantly allocating memory until reaching some limit, freeing pretty much all of it, and then reallocating constantly.
Expected behaviour
rTorrent uses a reasonable amount of memory. In the past, this has roughly been the max piece size, with some jumps to ~10GB during large rechecking operations.
Actual behaviour
On my 64GB machine, it constantly allocates until hitting around 50GB of usage, and then resets itself.
This setup was working perfectly fine for a long time. Yesterday I powered off my machine (safely stopping rtorrent first) to install some new fans, and I also ran apt update && apt upgrade. Once I turned it back on, this new memory behaviour presented itself. This is the only container out of ~45 showing an issue. You can see the behaviour before and after the restart in this chart: both time periods had roughly the same number of active torrents (~20), and no new downloads happened during this time period at all.
I have tried both the latest and edge tags and they show the same behaviour. I've also tried a fork I made of this image using jesec/rtorrent and jesec/libtorrent, and the issue persists there as well, so it's presumably unrelated to the patches made in either version. Zooming into the most recent five minutes, you can see it's allocated nearly 6GB.
Here's what it looks like when it frees; generally it's accompanied by higher than usual CPU usage as it drops from ~51GB to 2.3GB:
It seems to take 30-45 minutes to exhaust the memory "limit" and reset back to a reasonable number before climbing again.
Steps to reproduce
I don't really expect anyone to be able to reproduce this easily.
Docker info
Docker Compose config
Logs
Additional info
top from inside the container: