crazy-max / docker-rtorrent-rutorrent

rTorrent and ruTorrent Docker image
MIT License

Constant Segmentation Fault after loading for around 30 minutes #315

Closed. JackPo closed this issue 2 months ago.

JackPo commented 5 months ago

Support guidelines

I've found a bug and checked that ...

Description

Constant segmentation fault after starting the crazy_max rtorrent_rutorrent Docker container. I looked through a few recent issues and am running :edge, but that still doesn't fix the issue (with UDP turned off). I don't know how to enable stack traces in the Docker container, but the log file shows nothing out of the ordinary other than lots of timeouts, and it is hashing.

Seeding probably around 10K files or so (I don't remember exactly).


[14-Jan-2024 01:17:03] NOTICE: fpm is running, pid 519
[14-Jan-2024 01:17:03] NOTICE: ready to handle connections
2024/01/14 01:17:04 [notice] 514#514: using the "epoll" event method
2024/01/14 01:17:04 [notice] 514#514: nginx/1.24.0
2024/01/14 01:17:04 [notice] 514#514: OS: Linux 5.10.60-qnap
2024/01/14 01:17:04 [notice] 514#514: getrlimit(RLIMIT_NOFILE): 32000:40000
2024/01/14 01:17:04 [notice] 514#514: start worker processes
2024/01/14 01:17:04 [notice] 514#514: start worker process 551
2024/01/14 01:17:04 [notice] 514#514: start worker process 552
2024/01/14 01:17:04 [notice] 514#514: start worker process 553
2024/01/14 01:17:04 [notice] 514#514: start worker process 554
2024/01/14 01:17:04 [notice] 514#514: start worker process 556
2024/01/14 01:17:04 [notice] 514#514: start worker process 565
2024/01/14 01:17:04 [notice] 514#514: start worker process 575
2024/01/14 01:17:04 [notice] 514#514: start worker process 586
Caught Segmentation fault, dumping stack:
Stack dump not enabled.
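
(Editor's note on the "Stack dump not enabled." message: a minimal sketch of how one might capture a backtrace anyway, assuming the image is Alpine-based, the process is named rtorrent, and the container can be recreated with ptrace and core dumps allowed; none of this is a documented option of the image, so adjust names, ports, and volumes to your own setup.)

# Recreate the container so a debugger can attach (add your usual ports/volumes/env):
docker run -d --name rtorrent_rutorrent --cap-add=SYS_PTRACE --ulimit core=-1 crazymax/rtorrent-rutorrent:edge

# Install gdb inside the container and wait for the next crash to print a stack trace:
docker exec -it rtorrent_rutorrent sh -c 'apk add --no-cache gdb && gdb -batch -ex continue -ex bt -p "$(pidof rtorrent)"'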

Expected behaviour

Not to seg fault :)

Actual behaviour

segfaulting

Steps to reproduce

1) Start the container. 2) Wait 30 minutes. 3) It seg faults, and it will keep seg faulting every 30 minutes.

Docker info

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  compose: Docker Compose (Docker Inc., v2.14.1-qnap1)

Server:
 Containers: 36
  Running: 24
  Paused: 0
  Stopped: 12
 Images: 560
 Server Version: 20.10.22-qnap7
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay qnet
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux kata-runtime runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 78f51771157abb6c9ed224c22013cdf09962315d
 runc version: v1.1.4-0-g5fd4c4d1
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.10.60-qnap
 Operating System: QTS 5.1.4 (20231128)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 62.72GiB
 Name: BriarwoodQNAP
 ID: UL7H:JCBV:6XJS:JKLR:JGKL:FJN5:XDPX:DTIV:MQ5Z:536K:SSZY:2I6Q
 Docker Root Dir: /share/CACHEDEV3_DATA/Container/container-station-data/lib/docker
 Debug Mode: true
  File Descriptors: 697
  Goroutines: 181
  System Time: 2024-01-14T04:56:29.323665317-05:00
  EventsListeners: 2
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine
 Default Address Pools:
   Base: 172.29.0.0/16, Size: 22

Docker Compose config

No response

Logs

Huge 47 MB log; it keeps failing right after the log reaches 47 MB.

Additional info

No response

stickz commented 5 months ago

There is a known crashing issue with unaligned memory access. I'm waiting for #310 to be merged. This can happen when you're hashing torrents.

JackPo commented 5 months ago

Doh, in the meantime I just purged all of my session data, and it's currently restarting from a blank slate and adding all torrents back, because I wasn't sure what was going on. Will update once this is merged.
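
(Editor's note: "purged all of my session data" amounts to something like the sketch below; the host data path and the rtorrent/.session location are assumptions about a typical setup of this image, so adjust them to your own volume layout.)

# Stop the container, clear the rtorrent session directory, then start it again:
docker stop rtorrent_rutorrent
rm -rf /path/to/data/rtorrent/.session/*    # hypothetical host path of the mounted data volume
docker start rtorrent_rutorrent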

newadventure079 commented 5 months ago

I've gotten this dump 3 times in the last hour or so

Caught internal_error: FileList::mark_completed(...) received a chunk that has already been finished. [#4115B3837A1A7EC8E57BE199D49E30C5AD3B6B23]
Stack dump not enabled.

stickz commented 5 months ago

received a chunk that has already been finished

Please make a new issue report. This is an entirely different situation, even if it's potentially the same crash reason.

newadventure079 commented 5 months ago

@stickz #317

stickz commented 5 months ago

@JackPo Docker edge is ready now with a potential fix for your problem: docker pull crazymax/rtorrent-rutorrent:edge
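
(Editor's note: applying the update looks roughly like the following; the compose service name is only an example, so adjust it to your own setup.)

# Pull the updated edge image:
docker pull crazymax/rtorrent-rutorrent:edge

# With docker compose, recreate the container on the new image:
docker compose up -d rtorrent-rutorrent

# With plain docker run, stop and remove the old container, then run it again from the :edge tag.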

JackPo commented 5 months ago

Implemented! Will circle back on whether I still have crashes, thanks!!