crazy-max / docker-rtorrent-rutorrent

rTorrent and ruTorrent Docker image

rutorrent frontend stops updating for seconds to minutes at a time #299

Closed: katbyte closed this issue 6 months ago

katbyte commented 10 months ago

Support guidelines

I've found a bug and checked that ...

Description

The ruTorrent frontend stops updating for seconds to minutes at a time; this is especially evident in the speed graph, where the line goes flat from the lack of updates.

There is no indication of what is going on, or whether rTorrent is down, up, or busy doing something in the background.

Expected behaviour

The ruTorrent frontend should update at regular intervals, or give some indication of status/activity.

Actual behaviour

The frontend stops updating; the obvious indication (screenshot): the speed graph goes flat.

Sometimes the line will just stay flat for hours.

Steps to reproduce

This seems to be more frequent and worse with a large number of torrents (500+).

Docker info

[12:07:04] root@dd:/home/docker/config/torrents# docker info
Client: Docker Engine - Community
 Version:    24.0.7
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.11.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.21.0
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 14
  Running: 14
  Paused: 0
  Stopped: 0
 Images: 29
 Server Version: 24.0.7
 Storage Driver: overlay2
  Backing Filesystem: btrfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f
 runc version: v1.1.10-0-g18a0cb0
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.0-15-amd64
 Operating System: Debian GNU/Linux 12 (bookworm)
 OSType: linux
 Architecture: x86_64
 CPUs: 12
 Total Memory: 15.62GiB
 Name: dd
 ID: 4MVM:NVL3:4THB:HUX7:2LG2:INQL:OHLT:QBH7:TKDR:BXEC:JIG7:3LB7
 Docker Root Dir: /home/docker/data-root
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

### Docker Compose config

```yaml
[12:08:23] root@dd:/home/docker/config/torrents# docker compose config
name: torrents
services:
  geoip:
    container_name: geoip
    environment:
      DOWNLOAD_PATH: /data
      EDITION_IDS: GeoLite2-ASN,GeoLite2-City,GeoLite2-Country
      LICENSE_KEY:
      SCHEDULE: 0 0 * * 0
      TZ: '"America/Vancouver"'
    image: crazymax/geoip-updater:1.7.0
    networks:
      torrents: null
    restart: always
    volumes:
    - type: bind
      source: /home/docker/config/torrents/geoip
      target: /data
      bind:
        create_host_path: true
  gluetun:
    cap_add:
    - NET_ADMIN
    container_name: gluetun
    devices:
    - /dev/net/tun:/dev/net/tun
    environment:
      DOT: "off"
      FIREWALL_VPN_INPUT_PORTS: xx,xx
      SERVER_HOSTNAMES: us.vpn.airdns.org
      TZ: '"America/Vancouver"'
      VPN_SERVICE_PROVIDER: airvpn
      VPN_TYPE: wireguard
      WIREGUARD_ADDRESSES:/32
      WIREGUARD_PRESHARED_KEY: 
      WIREGUARD_PRIVATE_KEY: 
    hostname: gluetun
    image: qmcgaw/gluetun:v3.36.0
    networks:
      torrents: null
    ports:
    - mode: ingress
      target:xx
      published: "xx"
      protocol: udp
    - mode: ingress
      target:xx
      published: "xx"
      protocol: tcp
    - mode: ingress
      target:xx
      published: "xx"
      protocol: tcp
    - mode: ingress
      target:xx
      published: "xx"
      protocol: tcp
    - mode: ingress
      target:xx
      published: "xx"
      protocol: tcp
    restart: always
    volumes:
    - type: bind
      source: /home/docker/config/torrents/gluetun
      target: /gluetun
      bind:
        create_host_path: true
  rutorrent:
    container_name: rutorrent
    depends_on:
      gluetun:
        condition: service_started
        required: true
    environment:
      PGID: "1000"
      PUID: "1000"
      RT_DHT_PORT: "xx"
      RT_INC_PORT: "xx"
      RUTORRENT_PORT: "xx"
      TZ: '"America/Vancouver"'
      WEBDAV_PORT: "xx"
      XMLRPC_PORT: "xx"
    image: crazymax/rtorrent-rutorrent:4.2.9-0.9.8_2-0.13.8_1
    network_mode: service:gluetun
    restart: always
    stop_grace_period: 1m0s
    ulimits:
      nofile:
        soft: 32000
        hard: 40000
      nproc: 65535
    volumes:
    - type: bind
      source: /home/docker/config/torrents/rutorrent
      target: /data
      bind:
        create_host_path: true
    - type: bind
      source: /home/docker/config/torrents/geoip
      target: /data/geoip
      bind:
        create_host_path: true
    - type: bind
      source: /home/docker/config/torrents/passwd
      target: /passwd
      bind:
        create_host_path: true
    - type: bind
      source: /mnt/torrents/in
      target: /downloads/temp
      bind:
        create_host_path: true
    - type: bind
      source: /mnt/data/torrents/done
      target: /downloads/complete
      bind:
        create_host_path: true

networks:
  torrents:
    name: torrents
```

### Logs

```text
https://pastebin.com/wNZrfwBD
```

Additional info

No response

stickz commented 10 months ago

Could you provide your .rtorrent.rc file? This has valuable information.

Also could you set RU_REMOVE_CORE_PLUGINS=rpc and restart your docker container? httprpc should work better on AMD64.
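
For reference, a minimal sketch of how that could look in the Compose file posted above, assuming the variable is simply passed through the rutorrent service's environment block like the other settings (everything else stays as posted):

```yaml
  rutorrent:
    environment:
      # remove ruTorrent's "rpc" core plugin so the frontend talks to rTorrent
      # via httprpc instead, per the suggestion above
      RU_REMOVE_CORE_PLUGINS: rpc
      # ...keep the existing PUID/PGID/TZ/port variables unchanged...
```

Recreating the container with docker compose up -d rutorrent should then apply it.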

katbyte commented 10 months ago

sure thing


# Maximum and minimum number of peers to connect to per torrent
throttle.min_peers.normal.set = 1
throttle.max_peers.normal.set = 100

# Same as above but for seeding completed torrents (-1 = same as downloading)
throttle.min_peers.seed.set = 1
throttle.max_peers.seed.set = 50

scheduler.max_active.set = 500

throttle.max_downloads.global.set = 500
throttle.max_uploads.global.set   = 500

max_downloads_global = 500
max_uploads_global = 500

# Maximum number of simultaneous uploads per torrent
throttle.max_uploads.set = 20

# Global upload and download rate in KiB. "0" for unlimited
throttle.global_down.max_rate.set_kb = 0
throttle.global_up.max_rate.set_kb = 0

# Enable DHT support for trackerless torrents or when all trackers are down
# May be set to "disable" (completely disable DHT), "off" (do not start DHT),
# "auto" (start and stop DHT as needed), or "on" (start DHT immediately)
dht.mode.set = auto

# Enable peer exchange (for torrents not marked private)
protocol.pex.set = yes

# Check hash for finished torrents. Might be useful until the bug is
# fixed that causes lack of disk space not to be properly reported
pieces.hash.on_completion.set = yes

# Set whether the client should try to connect to UDP trackers
trackers.use_udp.set = yes

# Set the max amount of memory address space used to mapping file chunks. This refers to memory mapping, not
# physical memory allocation. Default: 1GB (max_memory_usage)
# This may also be set using ulimit -m where 3/4 will be allocated to file chunks
#pieces.memory.max.set = 1GB

# Alternative calls to bind and ip that should handle dynamic ip's
#schedule2 = ip_tick,0,1800,ip=rakshasa
#schedule2 = bind_tick,0,1800,bind=rakshasa

# Encryption options, set to none (default) or any combination of the following:
# allow_incoming, try_outgoing, require, require_RC4, enable_retry, prefer_plaintext
protocol.encryption.set = allow_incoming,try_outgoing,enable_retry

# Set the umask for this process, which is applied to all files created by the program
system.umask.set = 0022

# Add a preferred filename encoding to the list
encoding.add = UTF-8

# Watch a directory for new torrents, and stop those that have been deleted
schedule2 = watch_directory, 1, 1, (cat,"load.start=",(cfg.watch),"*.torrent")
schedule2 = untied_directory, 5, 5, (cat,"stop_untied=",(cfg.watch),"*.torrent")

# Close torrents when diskspace is low
schedule2 = monitor_diskspace, 15, 60, ((close_low_diskspace,1000M))

# Move finished (no need Autotools/Automove plugin on ruTorrent)
method.insert = d.get_finished_dir, simple, "cat=$cfg.download_complete=,$d.custom1="
method.insert = d.move_to_complete, simple, "d.directory.set=$argument.1=; execute=mkdir,-p,$argument.1=; execute=mv,-u,$argument.0=,$argument.1=; d.save_full_session="
method.set_key = event.download.finished,move_complete,"d.move_to_complete=$d.data_path=,$d.get_finished_dir="

# Erase data when torrent deleted (no need erasedata plugin on ruTorrent)
#method.set_key = event.download.erased,delete_erased,"execute=rm,-rf,--,$d.data_path="

# Adding public DHT servers for easy bootstrapping
schedule2 = dht_node_1, 5, 0, "dht.add_node=router.utorrent.com:6881"
schedule2 = dht_node_2, 5, 0, "dht.add_node=dht.transmissionbt.com:6881"
schedule2 = dht_node_3, 5, 0, "dht.add_node=router.bitcomet.com:6881"
schedule2 = dht_node_4, 5, 0, "dht.add_node=dht.aelitis.com:6881"

I believe it has mostly been left at the default options; I first set this up a while back now. Not sure what options can or should be increased for a server that's not resource constrained (VM on a hypervisor with 512 GB RAM and 64 cores).

stickz commented 10 months ago

Could you add the following settings to your .rtorrent.rc file? This will bring you up to date with current defaults.

network.xmlrpc.size_limit.set = 16M

# Configure session saving interval to balance disk usage and torrent information accuracy
schedule2 = session_save, 1200, 3600, ((session.save))

# Save torrents immediately to prevent losing them between session saving intervals
method.set_key = event.download.inserted, 2_save_session, ((d.save_full_session))

# Configure whether to delay tracker announces at startup
trackers.delay_scrape = yes

I need to know if this impacts the frequency of your problem.

  1. This will increase the session saving interval from 20 minutes to 1 hour.
  2. This will prevent torrent files from being lost in between session saving intervals.
  3. This will increase the amount of information that can be sent to ruTorrent at once.
  4. This will allow you to configure the tracker delay scrape feature. It's on by default for good reason.
stickz commented 10 months ago

If you're using UDP trackers, sit tight. I just submitted PR #303. This is a known issue with rTorrent.

katbyte commented 10 months ago

I applied both changes to both rTorrent containers I run, and the one with ~300 torrents is still showing the issue; it seems no better:

(screenshot: speed graph for the ~300-torrent container, still flatlining)

whereas the one with 16 torrents doesn't exhibit it anywhere near as much:

(screenshot: speed graph for the 16-torrent container)

It also went from downloading many torrents at a few MB/s to nearly every one now being inactive or in an error state. I've noticed many show Tracker: [unable to connect to UDP tracker] or Tracker: [Timed out]. Would this be fixed, or at least improved, by #303?

stickz commented 10 months ago

> It also went from downloading many torrents at a few MB/s to nearly every one now being inactive or in an error state. I've noticed many show Tracker: [unable to connect to UDP tracker] or Tracker: [Timed out]. Would this be fixed, or at least improved, by #303?

Yes, #303 should significantly improve your experience with UDP trackers. There should be no more speed drops, with the exception of the 1-hour session saving intervals. I have tested this patch with roughly 3,000 torrents.

stickz commented 10 months ago

Hello @katbyte, #303 is now merged and available on the edge Docker tag. It can be pulled using the following command:

docker pull crazymax/rtorrent-rutorrent:edge
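
If you're using the Compose file from the original post, a minimal sketch of the change is to point the service at the edge tag and recreate it (only the image line changes):

```yaml
  rutorrent:
    # swap the pinned release tag for the edge build that contains #303
    image: crazymax/rtorrent-rutorrent:edge
```

Then docker compose pull rutorrent followed by docker compose up -d rutorrent will recreate the container on the new image.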

> I believe it has mostly been left at the default options; I first set this up a while back now. Not sure what options can or should be increased for a server that's not resource constrained (VM on a hypervisor with 512 GB RAM and 64 cores).

rTorrent has very low CPU overhead; you will find that RAM matters most. 16 GB of RAM is a great starting point, and your capabilities will increase every time you double it. Anything higher than 64 GB of RAM would exceed the standard gigabit use case.

katbyte commented 10 months ago

Thanks @stickz, I will give this a try soon.

Maybe it is the UDP trackers refusing to download, but I threw on a couple hundred torrents yesterday and everything just stopped downloading until I got the number back under 400 by removing completed ones 😞 I don't think it's RAM, as the rTorrent process was sitting at around ~400 MB and the system had 13 GB free.

stickz commented 10 months ago

Yes, the main issue you're experiencing is the UDP trackers: they are not resolving the domain names properly.

I would keep in mind, though, that rTorrent lets the Linux kernel manage your memory. The more you have available for buffers and caches, the better your seeding throughput will be. Other factors come into play as well, such as torrent chunk size/count.

I'm working on getting the throttle plugin for ruTorrent fixed and more performance patches pushed for 10 gigabit speeds. Right now, you should be able to do about 1.6 gigabits both ways if your connection supports it. I'm at 2.5 gigabits full duplex currently.

katbyte commented 10 months ago

Well, I can safely say the latest patch has helped a bunch! A couple hundred "error" torrents fixed themselves and now I'm down to only ~40 errored. Is there anything that can be done to help with or resolve the remaining UDP tracker errors?

And that's great to hear about the future performance fixes; I have 3 Gbit symmetrical, so it would be nice to saturate it. I'll add a bunch back and see how it handles ~1000 torrents next.

stickz commented 10 months ago

I don't know why these errors are occurring with UDP trackers. I find it kind of funny because sometimes I can restart the torrent and the missed trackers will announce. It's progress just being able to use the torrent client with UDP trackers.

You should be able to disable the throttle and extratio plugins from the ruTorrent web portal. This will allow you to saturate your 3 Gbit connection. The biggest obstacle will be disk resources and the amount of memory available for buffers and caches.
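
If you would rather turn them off at the container level than through the web portal, something along these lines might work with the RU_REMOVE_CORE_PLUGINS variable mentioned earlier in this thread; this is only a sketch and assumes throttle and extratio are both core plugins that the variable can remove:

```yaml
  rutorrent:
    environment:
      # assumption: both plugins ship as core plugins removable via this variable;
      # if not, disable them from the ruTorrent plugin manager instead
      RU_REMOVE_CORE_PLUGINS: throttle,extratio
```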

Minus the bugs, though, rTorrent dominates when it comes to performance. It's not uncommon to use 20% of a single thread and 500 MB of RAM while seeding at 1 Gbit. The latest curl and gcc versions in this Docker container should contribute to that.