Closed JackPo closed 2 months ago
There is a known crashing issue with unaligned memory access. I'm waiting for #310 to be merged. This can happen when you're hashing torrents.
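For readers unfamiliar with the crash class: dereferencing a pointer that isn't aligned for its type is undefined behavior in C++ and can raise SIGBUS/SIGSEGV on strict architectures. A minimal sketch of the pattern and the portable fix (illustrative only, not rtorrent's actual code):

```cpp
#include <cstdint>
#include <cstring>

// Unsafe: if p is not 4-byte aligned, this cast-and-dereference is
// undefined behavior and can crash on some CPUs:
//   uint32_t v = *reinterpret_cast<const uint32_t*>(p);
//
// Portable fix: copy the bytes into a properly aligned local first.
uint32_t read_u32_unaligned(const unsigned char* p) {
    uint32_t v;
    std::memcpy(&v, p, sizeof(v));  // safe regardless of p's alignment
    return v;
}
```

Compilers typically lower the `memcpy` to a single load on platforms where unaligned access is cheap, so there is no real performance cost.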
Doh, in the meantime I just purged all of my session data, and it's currently restarting from a blank slate, re-adding all torrents, because I wasn't sure what was going on. Will update once this is merged.
I've gotten this dump 3 times in the last hour or so
Caught internal_error: FileList::mark_completed(...) received a chunk that has already been finished. [#4115B3837A1A7EC8E57BE199D49E30C5AD3B6B23]
Stack dump not enabled.
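For context, the internal_error above fires when the same chunk index is finished twice, e.g. when a caller races with the hash checker. A minimal, hypothetical sketch of that kind of guard (class and method names mirror the log message for illustration only; this is not rtorrent's actual code):

```cpp
#include <stdexcept>
#include <vector>

// Hypothetical sketch: a file list tracking completed chunks in a
// bitfield. Marking the same chunk twice raises the kind of
// internal_error seen in the log above.
class FileList {
public:
    explicit FileList(std::size_t chunk_count) : completed_(chunk_count, false) {}

    void mark_completed(std::size_t index) {
        if (completed_.at(index))
            throw std::logic_error(
                "FileList::mark_completed(...) received a chunk "
                "that has already been finished.");
        completed_[index] = true;
    }

    bool is_completed(std::size_t index) const { return completed_.at(index); }

private:
    std::vector<bool> completed_;
};
```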
received a chunk that has already been finished
Please make a new issue report. This is an entirely different situation, even if it's potentially the same crash reason.
@stickz #317
@JackPo Docker edge is ready now with a potential fix to your problem. docker pull crazymax/rtorrent-rutorrent:edge
Implemented! Will circle back on whether I still have crashes, thanks!!
Support guidelines
I've found a bug and checked that ...
Description
Constant segmentation faults after starting the crazymax/rtorrent-rutorrent Docker container. I looked through a few recent issues; running :edge (with UDP turned off) still doesn't fix it. I don't know how to turn stack traces on inside Docker, but the log file shows nothing out of the ordinary other than lots of timeouts while it's hashing.
Seeding around 10K torrents or so (I don't remember exactly).
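For anyone else who wants a stack trace from the container: one common approach is to allow core dumps via Compose ulimits. A hedged sketch only; the service name and volume path are assumptions, and I haven't verified this image honors them:

```yaml
# Allow unlimited-size core dumps from processes in the container.
services:
  rtorrent-rutorrent:
    image: crazymax/rtorrent-rutorrent:edge
    ulimits:
      core:
        soft: -1
        hard: -1   # -1 = unlimited core file size
    volumes:
      - ./coredumps:/tmp/coredumps   # assumed dump location
```

Where the dump actually lands is controlled host-wide by `kernel.core_pattern` (e.g. `sysctl -w kernel.core_pattern=/tmp/coredumps/core.%e.%p` on the host); the resulting core file can then be opened with gdb for a backtrace.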
Expected behaviour
Not to seg fault :)
Actual behaviour
Segfaulting.
Steps to reproduce
1) Start the container
2) Wait 30 minutes
3) It seg faults, and it will keep seg faulting every 30 minutes.
Docker info
Docker Compose config
No response
Logs
Additional info
No response