Novik / ruTorrent

Yet another web front-end for rTorrent

Web interface is always timing out after some time #1569

Closed Nottt closed 5 years ago

Nottt commented 7 years ago

I'm running ruTorrent in the linuxserver Docker container. Since the beginning I have been plagued by web interface issues. I know rtorrent is still running because my trackers still report my torrents as seeding.

rutorrent.log

And I see a few of those errors on my nginx logs :

2017/10/22 04:58:00 [error] 331#331: *22519 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 172.18.0.13, server: _, request: "POST /RPC2 HTTP/1.1", upstream: "scgi://unix:/run/php/.rtorrent.sock", host: "rutorrent"

2017/10/22 07:00:03 [error] 333#333: *782 connect() to unix:/run/php/.rtorrent.sock failed (111: Connection refused) while connecting to upstream, client: 172.18.0.13, server: _, request: "POST /RPC2 HTTP/1.1", upstream: "scgi://unix:/run/php/.rtorrent.sock:", host: "rutorrent"

I tried changing memory_limit to 256M; it was 128M before.

rutorrentphp.txt rutorrent.txt nginx.txt

This is a dedicated server, and most of my resources are idle including network. I have only 85 torrents. This seems to happen if I try to add a lot of torrents/remove a lot of stuff, and randomly too.
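For reference, a hedged sketch of the nginx directives involved in the first error (the location path and socket are taken from the log lines above; the timeout values are illustrative, and raising them only masks a stalled rtorrent rather than fixing it):

```nginx
# Sketch of the ruTorrent RPC location block -- a workaround only,
# since the real problem is rtorrent not answering on its socket.
location /RPC2 {
    include scgi_params;
    scgi_pass unix:/run/php/.rtorrent.sock;
    # nginx defaults these to 60s; raising them gives a busy
    # rtorrent more time before "upstream timed out" is logged.
    scgi_read_timeout 240s;
    scgi_send_timeout 240s;
}
```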

Novik commented 7 years ago

This seems to happen if I try to add a lot of torrents/remove a lot of stuff, and randomly too.

This means your server is slow. When rtorrent is doing something (adding or removing your files), it doesn't listen on the RPC port. As a result you get the "Operation timed out" message.

Try using the plain rtorrent console to control it.

If that doesn't work for you, then:

1. Move the ruTorrent initialization line (the one calling initplugins.php) to the end of the rtorrent.rc file.
2. Try the git version of ruTorrent.
3. Try the httprpc plugin instead of direct access to the RPC mount point.
4. Don't add 'a lot of torrents' at once.
5. Reduce the number of simultaneously downloading torrents.
6. Try the latest stable version of rtorrent.
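As a sketch of suggestion 1 (the PHP path and username are placeholders, not values from this thread), the initplugins.php call would sit at the very end of rtorrent.rc:

```ini
# ... network.*, schedule, and directory settings above ...

# ruTorrent plugin initialization kept as the LAST line, so rtorrent
# finishes loading its own config before the plugins start hitting RPC.
# The path and user "abc" are hypothetical examples.
execute = {sh,-c,/usr/bin/php /srv/rutorrent/php/initplugins.php abc &}
```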

Nottt commented 7 years ago

This also happens when idle or when downloading at low speeds like 10 MB/s. This server has 64 GB of memory and 12k of CPU. It's not a slow server. I even tried some tuning, increasing the max memory and other limits it could use, but it changed nothing. Then I removed those entries and still nothing... I don't care if ruTorrent uses 10 GB of RAM, for example; I just need stability.

I'm going to recreate the Docker container, try mostly default settings, and see what happens. If this still happens I'll just move to Deluge, because I never have any kind of issues with Deluge... but thanks :)

asavah commented 7 years ago

@Nottt wild guess: it's possible that your issue is due to rtorrent's I/O problems (https://github.com/rakshasa/rtorrent/issues/443); as your torrent count grows it will get worse, even on a beefy server. I wasn't getting ruTorrent timeouts, but NFS problems. Because of this I had to switch to Deluge (2.0 beta).

Nottt commented 7 years ago

@asavah I'm indeed downloading to a mergerfs pool of NFS drives, but everything else runs fine, I can read and write into the mount with speeds like 70 MB/s with rclone so at worst I should be seeing some performance hit, but my issues happen even with much lower speeds than my mount could achieve.

@Novik I have recreated the Docker container with the default settings that just work for almost everyone who uses it. The only differences in my setup are the attached .rc file and the fact that I download to a mergerfs folder that merges CIFS mounts.

rutorrentrc.txt

This is the same server and same container used by many people who have no issues at all. If the devs wish to debug this further let me know, but for now I'll try Deluge, as I need something that won't give me H&N lol

I have indeed been seeing strange stuff in nload on my eth0 port when the only thing running was ruTorrent: speeds much higher than what I'm actually downloading/seeding, so maybe it's issue #443.

iotop also shows extremely high disk usage, many times higher than rtorrent could ever need with the torrents I have loaded now (I have seen 100 MB/s)... this probably saturates my I/O, and then that causes the ruTorrent timeout?

So for now, people who use CIFS/NFS shares should not use ruTorrent? Is there any plan to fix this issue?

scoobynz commented 6 years ago

@Nottt if you are using mergerfs, have you confirmed that you are not using direct_io?

Search for "rtorrent" in this link; https://github.com/trapexit/mergerfs
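For context, a hypothetical /etc/fstab line following the mergerfs README's note on rtorrent (the branch and mount paths are placeholders): rtorrent relies on mmap, which direct_io disables, so the README suggests a page-cache mode instead.

```conf
# Hypothetical mergerfs mount: direct_io left off; cache.files=partial
# keeps mmap working for rtorrent (per the mergerfs README)
/mnt/disk*  /mnt/pool  fuse.mergerfs  cache.files=partial,dropcacheonclose=true  0 0
```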

Endda commented 5 years ago

Happens on my 8-core 32 GB RAM unRAID server multiple times per day, even when nothing is being downloaded

I don't use any torrenting software, but my server does have Sonarr, Radarr, SABnzbd, and a few other Docker containers running

Machine utilization usually sits at about 5 percent, but I keep that muximux tab open all day. No other tabs time out like this.

Muximux times out at least twice a day

aauren commented 4 years ago

I know this is a bit of an old thread, but I had this exact same problem, and it turned out that rtorrent was simply running out of open files. The system wasn't under any significant load. It had lots of available memory. Even IOPS weren't all that high.

Looking at how rtorrent manages files, it appears to hold every file of every torrent open for the duration of its run, so you end up needing a significant number of open file descriptors.

I increased its user limit in /etc/security/limits.conf and then set its max open files with network.max_open_files.set in the rtorrent.rc file. Once I tuned that appropriately (roughly 10x what I originally had), everything started working great.
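A sketch of that tuning (the username and numbers are illustrative, not the values from the comment above):

```conf
# /etc/security/limits.conf -- raise the per-user descriptor cap
# for whichever user runs rtorrent (user "abc" is a placeholder)
abc  soft  nofile  65536
abc  hard  nofile  65536

# rtorrent.rc -- let rtorrent itself use more descriptors
network.max_open_files.set = 16000
```

You can get a rough sense of how close rtorrent is to the limit with `ls /proc/$(pidof rtorrent)/fd | wc -l`.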