Closed: scottgrobinson closed this issue 2 years ago
On further investigation, this was caused by an OOM issue: I had limited my Docker container to 2 GB of RAM whilst trying to narrow down a separate issue with Liquidsoap, which seems to have a memory leak (I may be completely wrong there, though).
Note the graph below, where CPU and memory usage just keep climbing steadily. Same question as above: how can I assist in debugging that? :)
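For context, a cap like the 2 GB one described above is typically applied when the container is started, and the steady climb can be confirmed from the host. A minimal sketch, assuming `docker run` was used directly (the exact run command and container name are assumptions, not the real deployment):

```
# Assumed run command: cap the container at 2 GB so the kernel OOM-killer
# fires once Liquidsoap's resident memory exceeds the cgroup limit.
docker run --memory=2g --name liquidsoap savonet/liquidsoap:v2.0.3 /path/to/script.liq

# Sample live CPU/memory usage to confirm the upward trend seen in the graph.
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```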
Feb 21 07:58:48 streaming kernel: [41695.786434] liquidsoap invoked oom-killer: gfp_mask=0x14000c0(GFP_KERNEL), nodemask=(null), order=0, oom_score_adj=0
Feb 21 07:58:48 streaming kernel: [41695.786438] liquidsoap cpuset=459fb4c2f859b194bd469e4a2d9b79cc371244d2ea3fab3caef8cebb33cd8a0b mems_allowed=0
Feb 21 07:58:48 streaming kernel: [41695.786442] CPU: 1 PID: 4502 Comm: liquidsoap Not tainted 4.14.138-rancher #1
Feb 21 07:58:48 streaming kernel: [41695.786442] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090008 12/07/2018
Feb 21 07:58:48 streaming kernel: [41695.786443] Call Trace:
Feb 21 07:58:48 streaming kernel: [41695.786451] dump_stack+0x5a/0x6f
Feb 21 07:58:48 streaming kernel: [41695.786455] dump_header+0x94/0x217
Feb 21 07:58:48 streaming kernel: [41695.786457] ? _raw_spin_unlock_irqrestore+0x16/0x18
Feb 21 07:58:48 streaming kernel: [41695.786458] oom_kill_process+0x83/0x366
Feb 21 07:58:48 streaming kernel: [41695.786460] out_of_memory+0x3a8/0x3c8
Feb 21 07:58:48 streaming kernel: [41695.786462] mem_cgroup_out_of_memory+0x3d/0x56
Feb 21 07:58:48 streaming kernel: [41695.786464] mem_cgroup_oom_synchronize+0x25d/0x271
Feb 21 07:58:48 streaming kernel: [41695.786466] ? mem_cgroup_is_descendant+0x48/0x48
Feb 21 07:58:48 streaming kernel: [41695.786467] pagefault_out_of_memory+0x1f/0x4c
Feb 21 07:58:48 streaming kernel: [41695.786470] __do_page_fault+0x3d7/0x433
Feb 21 07:58:48 streaming kernel: [41695.786472] ? page_fault+0x2f/0x50
Feb 21 07:58:48 streaming kernel: [41695.786473] page_fault+0x45/0x50
Feb 21 07:58:48 streaming kernel: [41695.786475] RIP: 3acf5c3c:0x7fcca12f2040
Feb 21 07:58:48 streaming kernel: [41695.786476] RSP: e0004400:000000003ad0db57 EFLAGS: 3acf6000
Feb 21 07:58:48 streaming kernel: [41695.786477] Task in /docker/563790f30fc99a324c9f7e931032d8cc1674b7430e0ef02ed8a41a367e84dad9/docker/459fb4c2f859b194bd469e4a2d9b79cc371244d2ea3fab3caef8cebb33cd8a0b killed as a result of limit of /docker/563790f30fc99a324c9f7e931032d8cc1674b7430e0ef02ed8a41a367e84dad9/docker/459fb4c2f859b194bd469e4a2d9b79cc371244d2ea3fab3caef8cebb33cd8a0b
Feb 21 07:58:48 streaming kernel: [41695.786482] memory: usage 2097152kB, limit 2097152kB, failcnt 31
Feb 21 07:58:48 streaming kernel: [41695.786482] memory+swap: usage 2097152kB, limit 4194304kB, failcnt 0
Feb 21 07:58:48 streaming kernel: [41695.786483] kmem: usage 6300kB, limit 9007199254740988kB, failcnt 0
Feb 21 07:58:48 streaming kernel: [41695.786483] Memory cgroup stats for /docker/563790f30fc99a324c9f7e931032d8cc1674b7430e0ef02ed8a41a367e84dad9/docker/459fb4c2f859b194bd469e4a2d9b79cc371244d2ea3fab3caef8cebb33cd8a0b: cache:12KB rss:2090840KB rss_huge:2058240KB shmem:0KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:0KB active_anon:2090824KB inactive_file:8KB active_file:4KB unevictable:16KB
Feb 21 07:58:48 streaming kernel: [41695.786490] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
Feb 21 07:58:48 streaming kernel: [41695.786537] [ 4280] 10000 4280 578 143 5 3 0 0 tini
Feb 21 07:58:48 streaming kernel: [41695.786540] [ 4468] 10000 4468 752519 536709 1139 7 0 0 liquidsoap
Feb 21 07:58:48 streaming kernel: [41695.786542] Memory cgroup out of memory: Kill process 4468 (liquidsoap) score 1025 or sacrifice child
Feb 21 07:58:48 streaming kernel: [41695.786566] Killed process 4468 (liquidsoap) total-vm:3010076kB, anon-rss:2090068kB, file-rss:56768kB, shmem-rss:0kB
Feb 21 07:58:48 streaming kernel: [41695.799508] oom_reaper: reaped process 4468 (liquidsoap), now anon-rss:16kB, file-rss:0kB, shmem-rss:0kB
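The decisive lines above are `memory: usage 2097152kB, limit 2097152kB` and the kill of PID 4468: the container hit its 2 GiB cgroup cap exactly, and the kill at 07:58:48 lines up with the "LOG START" at 07:58:51 reported below. A quick way to cross-check the limit a running container is actually under (a sketch, assuming a cgroup-v1 host, consistent with the 4.14 kernel in the trace; `<container_id>` is a placeholder):

```
# Memory limit Docker has applied to the container, in bytes (0 means unlimited)
docker inspect --format '{{.HostConfig.Memory}}' <container_id>

# The same limit as the kernel sees it on a cgroup-v1 host;
# 2147483648 bytes corresponds to the 2097152kB in the OOM report above
cat /sys/fs/cgroup/memory/docker/<container_id>/memory.limit_in_bytes
```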
Thanks for reporting! You might want to have a look at this discussion: https://github.com/AzuraCast/AzuraCast/issues/5010#issuecomment-1030131029
Closing here for now. Please feel free to open another ticket if needed.
Describe the bug
Liquidsoap seems to restart without any errors or crash notifications. The streams disconnect and the only log line I can see is "LOG START", indicating that Liquidsoap has started up again (Liquidsoap is running in a Docker container). I have played the same song shown just before "LOG START" again to see whether it was linked in any way, but nothing happened that time around; Liquidsoap continued as expected.
LOG START seen at 07:58:51
Whilst I assume there's not much to go on here, any guidance as to what can be done to help capture additional details for next time would be appreciated.
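For the next occurrence, a couple of host-side captures usually narrow this kind of thing down. A sketch, assuming shell access to the Docker host (`<container_id>` is a placeholder); raising Liquidsoap's log level in the script can also help:

```
# Persist the container's stdout/stderr with timestamps, so the last lines
# written before a restart survive the restart itself.
docker logs --timestamps --follow <container_id> >> liquidsoap-container.log 2>&1 &

# Check the host kernel log for OOM kills around the restart time;
# an entry like the trace above would confirm the container limit was hit.
dmesg -T | grep -iE 'oom|killed process'
```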
To Reproduce
Reproduction steps are unclear (however, the script used is shown below).
Expected behavior
Liquidsoap not to crash!
Version details
Install method: Docker via savonet/liquidsoap:v2.0.3