CyferShepard / Jellystat

Jellystat is a free and open source Statistics App for Jellyfin

Persistent jfstat on system reboot #246

Open ohaiimchris opened 1 week ago

ohaiimchris commented 1 week ago

Howdy, I posted a few weeks ago about an issue where, after a system reboot, Jellystat wouldn't start and was stuck in a loop. It appears the Postgres part of the docker-compose file I'm using (copied directly from here) keeps trying to create the jfstat database even though it already exists. I've tried emptying out the postgres-data and backup-data folders. Any ideas? Here's my Postgres log:

```
PostgreSQL Database directory appears to contain a database; Skipping initialization

2024-09-13 13:36:59.375 UTC [1] LOG: starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2024-09-13 13:36:59.375 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2024-09-13 13:36:59.375 UTC [1] LOG: listening on IPv6 address "::", port 5432
2024-09-13 13:36:59.378 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-09-13 13:36:59.383 UTC [29] LOG: database system was shut down at 2024-09-13 13:36:53 UTC
2024-09-13 13:36:59.387 UTC [1] LOG: database system is ready to accept connections
2024-09-13 13:37:00.196 UTC [33] ERROR: database "jfstat" already exists
2024-09-13 13:37:00.196 UTC [33] STATEMENT: CREATE DATABASE jfstat
2024-09-13 13:41:59.471 UTC [27] LOG: checkpoint starting: time
2024-09-13 13:41:59.791 UTC [27] LOG: checkpoint complete: wrote 6 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.305 s, sync=0.006 s, total=0.321 s; sync files=5, longest=0.002 s, average=0.002 s; distance=6 kB, estimate=6 kB
2024-09-13 13:57:00.079 UTC [27] LOG: checkpoint starting: time
2024-09-13 13:57:00.498 UTC [27] LOG: checkpoint complete: wrote 5 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.403 s, sync=0.007 s, total=0.419 s; sync files=5, longest=0.005 s, average=0.002 s; distance=7 kB, estimate=7 kB
2024-09-13 14:57:01.544 UTC [27] LOG: checkpoint starting: time
2024-09-13 14:57:02.165 UTC [27] LOG: checkpoint complete: wrote 7 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.605 s, sync=0.007 s, total=0.621 s; sync files=7, longest=0.004 s, average=0.001 s; distance=15 kB, estimate=15 kB
2024-09-13 15:12:01.415 UTC [27] LOG: checkpoint starting: time
2024-09-13 15:12:01.941 UTC [27] LOG: checkpoint complete: wrote 6 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.505 s, sync=0.006 s, total=0.527 s; sync files=5, longest=0.002 s, average=0.002 s; distance=8 kB, estimate=14 kB
2024-09-13 15:27:02.220 UTC [27] LOG: checkpoint starting: time
2024-09-13 15:27:02.737 UTC [27] LOG: checkpoint complete: wrote 6 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.504 s, sync=0.007 s, total=0.517 s; sync files=6, longest=0.004 s, average=0.002 s; distance=9 kB, estimate=14 kB
2024-09-13 15:47:03.117 UTC [27] LOG: checkpoint starting: time
2024-09-13 15:47:03.540 UTC [27] LOG: checkpoint complete: wrote 5 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.404 s, sync=0.007 s, total=0.423 s; sync files=5, longest=0.005 s, average=0.002 s; distance=11 kB, estimate=14 kB
2024-09-13 15:57:03.727 UTC [27] LOG: checkpoint starting: time
2024-09-13 15:57:04.251 UTC [27] LOG: checkpoint complete: wrote 6 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.504 s, sync=0.007 s, total=0.524 s; sync files=6, longest=0.004 s, average=0.002 s; distance=12 kB, estimate=13 kB
```

And here's my Jellystat log: https://pastebin.com/xmYps7Lw

The issue goes away if I run `docker compose down` and then `docker compose up -d`, but it gets stuck in the loop again the next time my system restarts.

CyferShepard commented 1 week ago

Hey @ohaiimchris, this looks like a Postgres issue rather than a Jellystat one. Behavior like this sometimes arises when the database terminates suddenly, e.g. due to power loss, an improper shutdown, or an OS crash. It happened to me as well when my server was unplugged by mistake.
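
For reference, one pattern that is sometimes used to reduce this kind of startup race in Compose setups is to put a healthcheck on the Postgres service and have the Jellystat service wait for it to report healthy. The sketch below is only illustrative and makes assumptions: the service names, image tags, credentials, volume paths, and the `POSTGRES_IP`/`POSTGRES_PORT` variables are placeholders, not copied from this thread or from the project's official docker-compose.yml.

```yaml
# Minimal sketch only: names, credentials, and paths are assumed placeholders.
services:
  jellystat-db:
    image: postgres:15.2
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: mypassword
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      # pg_isready reports success once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  jellystat:
    image: cyfershepard/jellystat:latest
    environment:
      POSTGRES_IP: jellystat-db
      POSTGRES_PORT: 5432
    depends_on:
      jellystat-db:
        condition: service_healthy   # wait for the healthcheck, not just container start
    restart: unless-stopped
```

With `condition: service_healthy`, `docker compose up` waits for `pg_isready` to succeed before starting Jellystat. Whether this also covers a host reboot depends on how the stack is brought back up, since restart policies applied by the Docker daemon do not re-apply Compose's dependency ordering.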