tmisilo opened 8 months ago
Sep 27 08:38:47 backupauto ripcd: GPO 0:45 OFF
Sep 27 08:38:47 backupauto ripcd: GPO 0:46 OFF
Sep 27 08:38:47 backupauto ripcd: GPO 0:45 OFF
Sep 27 08:38:47 backupauto ripcd: GPO 0:46 OFF
Sep 27 08:38:47 backupauto ripcd: GPO 0:45 OFF
Sep 27 08:38:47 backupauto ripcd: GPO 0:46 OFF
Sep 27 08:38:47 backupauto ripcd: GPO 0:45 OFF
Sep 27 08:38:47 backupauto ripcd: GPO 0:46 OFF
Sep 27 08:38:47 backupauto kernel: Purging GPU memory, 16 pages freed, 10825 pages still pinned.
Sep 27 08:38:47 backupauto kernel: mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Sep 27 08:38:47 backupauto kernel: mysqld cpuset=/ mems_allowed=0
Sep 27 08:38:47 backupauto kernel: CPU: 2 PID: 9674 Comm: mysqld Kdump: loaded Tainted: G OE ------------ 3.10.0-1160.95.1.el7.x86_64 #1
Sep 27 08:38:47 backupauto kernel: Hardware name: System manufacturer System Product Name/PRIME H370M-PLUS, BIOS 1303 03/20/2019
Sep 27 08:38:47 backupauto kernel: Call Trace:
Sep 27 08:38:47 backupauto kernel: [
Oct 2 06:55:00 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:00 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:00 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:00 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:00 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:00 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:00 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:00 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:00 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:00 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:01 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:01 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:01 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:01 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:01 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:01 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:01 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:01 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:01 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:02 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:02 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:02 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:02 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:02 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:02 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:02 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:02 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:03 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:03 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:03 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:03 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:03 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:03 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:03 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:03 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:03 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:03 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:04 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:04 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:04 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:04 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:04 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:04 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:04 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:04 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:05 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:06 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:06 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:06 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:06 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:06 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:06 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:06 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:06 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:06 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:06 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:07 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:07 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:07 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:07 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:07 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:07 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:07 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:07 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:07 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:07 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:07 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:08 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:08 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:08 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:08 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:08 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:08 backupauto ripcd: GPO 0:44 OFF
Oct 2 06:55:08 backupauto ripcd: GPO 0:40 OFF
Oct 2 06:55:08 backupauto ripcd: GPO 0:45 OFF
Oct 2 06:55:08 backupauto ripcd: GPO 0:46 OFF
Oct 2 06:55:08 backupauto kernel: mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Oct 2 06:55:08 backupauto kernel: mysqld cpuset=/ mems_allowed=0
Oct 2 06:55:08 backupauto kernel: CPU: 1 PID: 7876 Comm: mysqld Kdump: loaded Tainted: G OE ------------ 3.10.0-1160.99.1.el7.x86_64 #1
Oct 2 06:55:08 backupauto kernel: Hardware name: System manufacturer System Product Name/PRIME H370M-PLUS, BIOS 1303 03/20/2019
Oct 2 06:55:08 backupauto kernel: Call Trace:
Oct 2 06:55:08 backupauto kernel: [
Oct 8 00:48:44 backupauto kernel: Purging GPU memory, 24 pages freed, 10889 pages still pinned.
Oct 8 00:48:44 backupauto kernel: systemd-journal invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Oct 8 00:48:44 backupauto kernel: systemd-journal cpuset=/ mems_allowed=0
Oct 8 00:48:44 backupauto kernel: CPU: 0 PID: 432 Comm: systemd-journal Kdump: loaded Tainted: G OE ------------ 3.10.0-1160.99.1.el7.x86_64 #1
Oct 8 00:48:44 backupauto kernel: Hardware name: System manufacturer System Product Name/PRIME H370M-PLUS, BIOS 1303 03/20/2019
Oct 8 00:48:44 backupauto kernel: Call Trace:
Oct 8 00:48:44 backupauto kernel: [
Oct 15 23:24:09 backupauto ripcd: GPO 0:41 OFF
Oct 15 23:24:09 backupauto ripcd: GPO 0:44 OFF
Oct 15 23:24:09 backupauto rdairplay: log engine: finished event: Line: 1189 Cart: 93006 Cut: 6 Card: 0 Stream: 0 Port: 2
Oct 15 23:24:09 backupauto caed: UnloadPlayback - Card: 0 Stream: 0 Handle: 155
Oct 15 23:24:09 backupauto ripcd: GPO 0:45 OFF
Oct 15 23:24:09 backupauto ripcd: GPO 0:46 OFF
Oct 15 23:24:10 backupauto ripcd: GPO 0:46 OFF
Oct 15 23:24:10 backupauto ripcd: GPO 0:40 OFF
Oct 15 23:24:10 backupauto caed: LoadPlayback Card: 0 Stream: 0 Name: /var/snd/093006_014.wav Handle: 156
Oct 15 23:24:10 backupauto ripcd: GPO 0:41 OFF
Oct 15 23:24:10 backupauto ripcd: GPO 0:44 OFF
Oct 15 23:24:10 backupauto caed: PlaybackPosition - Card: 0 Stream: 0 Pos: 0 Handle: 156
Oct 15 23:24:10 backupauto rdairplay: log engine: started audio cart: Line: 1190 Cart: 93006 Cut: 14 Pos: 0 Card: 0 Stream: 0 Port: 2
Oct 15 23:24:10 backupauto caed: Play - Card: 0 Stream: 0 Handle: 156 Length: 30000 Speed: 100000 Pitch: 0
Oct 15 23:24:10 backupauto ripcd: GPO 0:44 OFF
Oct 15 23:24:10 backupauto ripcd: GPO 0:45 OFF
Oct 15 23:24:40 backupauto ripcd: GPO 0:40 OFF
Oct 15 23:24:40 backupauto ripcd: GPO 0:41 OFF
Oct 15 23:24:40 backupauto rdairplay: log engine: finished event: Line: 1190 Cart: 93006 Cut: 14 Card: 0 Stream: 0 Port: 2
Oct 15 23:24:40 backupauto kernel: hpimsgx.c:508 ffffa03cc12e5800 trying to close 0 outstream 0 owned by (null)
Oct 15 23:24:40 backupauto caed: HPI Error: #104 - OBJ_NOT_OPEN, rdhpiplaystream.cpp line 822
Oct 15 23:24:40 backupauto caed: UnloadPlayback - Card: 0 Stream: 0 Handle: 156
Oct 15 23:24:40 backupauto ripcd: GPO 0:44 OFF
Oct 15 23:24:40 backupauto ripcd: GPO 0:45 OFF
Oct 15 23:24:42 backupauto caed: LoadPlayback Card: 0 Stream: 0 Name: /var/snd/019018_001.wav Handle: 157
Oct 15 23:24:43 backupauto ripcd: GPO 0:46 OFF
Oct 15 23:24:43 backupauto caed: PlaybackPosition - Card: 0 Stream: 0 Pos: 0 Handle: 157
Oct 15 23:24:43 backupauto caed: Play - Card: 0 Stream: 0 Handle: 157 Length: 14829 Speed: 100000 Pitch: 0
Oct 15 23:24:43 backupauto rdairplay: log engine: started audio cart: Line: 1191 Cart: 19018 Cut: 1 Pos: 0 Card: 0 Stream: 0 Port: 2
Oct 15 23:24:43 backupauto ripcd: GPO 0:40 OFF
Oct 15 23:24:43 backupauto ripcd: GPO 0:41 OFF
Oct 15 23:24:57 backupauto caed: StopPlayback - Card: 0 Stream: 0 Handle: 157
Oct 15 23:24:57 backupauto ripcd: GPO 0:44 OFF
Oct 15 23:24:58 backupauto ripcd: GPO 0:40 OFF
Oct 15 23:24:58 backupauto caed: LoadPlayback Card: 0 Stream: 1 Name: /var/snd/037775_001.wav Handle: 158
Oct 15 23:24:58 backupauto kernel: Purging GPU memory, 32 pages freed, 9692 pages still pinned.
Oct 15 23:24:58 backupauto kernel: systemd-journal invoked oom-killer: gfp_mask=0x200da, order=0, oom_score_adj=0
Oct 15 23:24:58 backupauto kernel: systemd-journal cpuset=/ mems_allowed=0
Oct 15 23:24:58 backupauto kernel: CPU: 1 PID: 432 Comm: systemd-journal Kdump: loaded Tainted: G OE ------------ 3.10.0-1160.99.1.el7.x86_64 #1
Oct 15 23:24:58 backupauto kernel: Hardware name: System manufacturer System Product Name/PRIME H370M-PLUS, BIOS 1303 03/20/2019
Oct 15 23:24:58 backupauto kernel: Call Trace:
Oct 15 23:24:58 backupauto kernel: [
Oct 14 20:50:29 StudioCProd kernel: X invoked oom-killer: gfp_mask=0xa04d2, order=0, oom_score_adj=0
Oct 14 20:50:29 StudioCProd kernel: X cpuset=/ mems_allowed=0
Oct 14 20:50:29 StudioCProd kernel: CPU: 2 PID: 1442 Comm: X Kdump: loaded Tainted: G OE ------------ 3.10.0-1160.95.1.el7.x86_64 #1
Oct 14 20:50:29 StudioCProd kernel: Hardware name: System manufacturer System Product Name/H170M-E D3, BIOS 0902 11/16/2015
Oct 14 20:50:29 StudioCProd kernel: Call Trace:
Oct 14 20:50:29 StudioCProd kernel: [
Oct 14 20:50:29 StudioCProd kernel: X invoked oom-killer: gfp_mask=0xa04d2, order=0, oom_score_adj=0
Oct 14 20:50:29 StudioCProd kernel: X cpuset=/ mems_allowed=0
Oct 14 20:50:29 StudioCProd kernel: CPU: 2 PID: 1442 Comm: X Kdump: loaded Tainted: G OE ------------ 3.10.0-1160.95.1.el7.x86_64 #1
Oct 14 20:50:29 StudioCProd kernel: Hardware name: System manufacturer System Product Name/H170M-E D3, BIOS 0902 11/16/2015
Oct 14 20:50:29 StudioCProd kernel: Call Trace:
Oct 14 20:50:29 StudioCProd kernel: [
Oct 14 20:50:29 StudioCProd kernel: X invoked oom-killer: gfp_mask=0xa04d2, order=0, oom_score_adj=0
Oct 14 20:50:29 StudioCProd kernel: X cpuset=/ mems_allowed=0
Oct 14 20:50:29 StudioCProd kernel: CPU: 2 PID: 1442 Comm: X Kdump: loaded Tainted: G OE ------------ 3.10.0-1160.95.1.el7.x86_64 #1
Oct 14 20:50:29 StudioCProd kernel: Hardware name: System manufacturer System Product Name/H170M-E D3, BIOS 0902 11/16/2015
Oct 14 20:50:29 StudioCProd kernel: Call Trace:
Oct 14 20:50:29 StudioCProd kernel: [
Oct 26 17:44:33 StudioCProd kernel: rdvairplayd invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0
Oct 26 17:44:33 StudioCProd kernel: rdvairplayd cpuset=/ mems_allowed=0
Oct 26 17:44:33 StudioCProd kernel: CPU: 2 PID: 32293 Comm: rdvairplayd Kdump: loaded Tainted: G OE ------------ 3.10.0-1160.95.1.el7.x86_64 #1
Oct 26 17:44:33 StudioCProd kernel: Hardware name: System manufacturer System Product Name/H170M-E D3, BIOS 0902 11/16/2015
Oct 26 17:44:33 StudioCProd kernel: Call Trace:
Oct 26 17:44:33 StudioCProd kernel: [
Oct 26 17:44:33 StudioCProd kernel: rdvairplayd invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=0
Oct 26 17:44:33 StudioCProd kernel: rdvairplayd cpuset=/ mems_allowed=0
Oct 26 17:44:33 StudioCProd kernel: CPU: 2 PID: 32293 Comm: rdvairplayd Kdump: loaded Tainted: G OE ------------ 3.10.0-1160.95.1.el7.x86_64 #1
Oct 26 17:44:33 StudioCProd kernel: Hardware name: System manufacturer System Product Name/H170M-E D3, BIOS 0902 11/16/2015
Oct 26 17:44:33 StudioCProd kernel: Call Trace:
Oct 26 17:44:33 StudioCProd kernel: [
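For anyone triaging similar behaviour, the OOM events above are easy to pull out of the syslog; a quick sketch, assuming the stock CentOS 7 log path:

```
# list every OOM event with a little context (stock CentOS 7 syslog path)
grep -B2 -A6 "invoked oom-killer" /var/log/messages

# tally which processes triggered the killer
grep "invoked oom-killer" /var/log/messages | awk '{print $6}' | sort | uniq -c
```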
Hi! Good news... I was waiting for someone else to be able to reproduce this!
Very possibly related to this issue I opened a while ago during beta-testing:
https://github.com/ElvishArtisan/rivendell/issues/865
Cheers.
Yup. Good to get some additional data.
@tmisilo, can you tell us a bit about your GPIO setup on that host? Specifically, the 'Switcher Type' being used in its Rivendell config, plus the connection details and make/model of the GPIO hardware (if any).
Found it! Resource leak when processing RMLs.
Wow... but why do recordings exacerbate the leak? I mean, in my graphs I see that ripcd grows in RAM usage all the time, although very slowly (which matches a small leak on every occasional RML executed), but during a recording ripcd's RAM usage literally skyrockets... or maybe during a recording a lot of RMLs are run under the hood?
Cheers!
Will this get backported to 3.6.X?
On our station that uses a lot of RMLs, we have to reboot occasionally due to resource depletion. This typically happens when the machine has frozen up and taken us off the air.
Cheers!
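Until a fix ships, a crude watchdog that restarts the service before memory runs out can at least keep a station on air; a minimal sketch (the threshold, the process name, and the 'rivendell' service name are assumptions for a typical host):

```
#!/bin/sh
# crude watchdog: restart Rivendell if ripcd's resident size exceeds a limit
# (threshold and service name are assumptions; adjust for your site)
LIMIT=1048576   # KiB, i.e. 1 GiB
RSS=$(ps -C ripcd -o rss= | head -1)
if [ -n "$RSS" ] && [ "$RSS" -gt "$LIMIT" ]; then
    logger "ripcd RSS ${RSS} KiB exceeds ${LIMIT} KiB, restarting rivendell"
    systemctl restart rivendell
fi
```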
Wow... but why do recordings exacerbate the leak?
Good question! We'll fix this, and see what happens...
Will this get backported to 3.6.X?
Yes.
Fixed in 8d9e5e8 [branch v4]. Please test!
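For anyone wanting to test the fix from source, the usual Rivendell autotools flow should apply; a sketch, assuming a prepared build environment (dependency packages and configure options not shown, and the exact bootstrap script may differ):

```
git clone https://github.com/ElvishArtisan/rivendell.git
cd rivendell
git checkout v4            # branch carrying commit 8d9e5e8
./autogen.sh
./configure
make
sudo make install
```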
I hope that commit fixes the leaking while dealing with RMLs... but it's going to be hard to verify on my side, since it has not fixed the huge memory leak during the Thursday-night session recordings (which masks any minor leaking elsewhere). On Monday, after the weekend recordings/uploads, when I restart Rivendell to clear RAM, I will have four days with no recordings, so I may be able to notice a flat (or flatter) ripcd process graph.
Cheers.
Thanks Alejandro. I will keep this open pending results from your test.
Hi... This morning I did the weekly restart to clear the RAM/swap from recordings. Munin graphing of the memory usage of the Rivendell processes is still running (in January I'll have a full year of graphs! :-), so everything is being recorded. By Thursday evening (before the night recording) I expect to have enough 'idle' activity graphed to compare with previous weeks and see whether the graph stops its steady climb... will report.
Here are the graphs (pre-commit and post-commit).
I see no difference... ripcd (caed seems to behave alike) slowly but steadily increases its memory usage. However, this doesn't prove that an acute spike in leaking during intensive GPIO activity hasn't been fixed by the latest code changes... I don't use any GPIO in my VM, so I can't assert anything beyond this: on an 'idle' playout Rivendell instance, those processes keep eating memory.
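For anyone without Munin who wants to watch the same trend, periodically sampling the resident size is enough; a minimal sketch (the interval and log path are arbitrary examples):

```
# sample resident set size (KiB) of ripcd and caed every 5 minutes
while true; do
    printf '%s ripcd=%s caed=%s\n' "$(date '+%F %T')" \
        "$(ps -C ripcd -o rss= | head -1)" \
        "$(ps -C caed -o rss= | head -1)" >> /tmp/rd-mem.log
    sleep 300
done
```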
I've been reading about a tool called valgrind that is supposed to help find memory leaks... but I've read that using it causes quite a noticeable overhead and performance hit, since it makes the program under scrutiny require far more resources, so I'm not sure whether my VM is an adequate environment for such a test. Also, I'm not experienced with it, so I'm unsure how I should proceed... maybe some experienced person could try running ripcd under valgrind or a similar tool in a more resourceful environment.
EDIT: Managed to run ripcd under valgrind (early feasibility tests... not sure what I'm doing). Tonight it will definitely leak, and probably during the day as well, so I expect some results to be gathered in the log, although maybe I'm not using the right/optimal arguments for valgrind to catch the problem.
Hi. Here's an update on my research into the memory leak.
First and foremost, I've been surprised to see that, when running ripcd under valgrind, the leaking has been contained. For the first time in almost a year, the recordings and uploads have completed successfully (though apparently with no listen-through) without depleting the system RAM.
However, I think the logged information from this first test may not be of much value, since I missed the parameter valgrind requires to trace memory leaks in child processes when I launched it last Friday. I have stopped ripcd under valgrind, saved the log file, and launched it again, this time with trace-children enabled.
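For reference, an invocation that captures child processes and full leak records looks roughly like this (the log path is an example; flags per the valgrind manual):

```
# stop the service-managed ripcd first, then start it under valgrind;
# --trace-children follows forked/exec'd children
valgrind --leak-check=full --show-leak-kinds=all \
         --trace-children=yes --track-origins=yes \
         --log-file=/tmp/valgrind-ripcd.log ripcd
```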
So far, the summary of these early tests is:
==1279884== LEAK SUMMARY:
==1279884==    definitely lost: 0 bytes in 0 blocks
==1279884==    indirectly lost: 0 bytes in 0 blocks
==1279884==      possibly lost: 896 bytes in 2 blocks
==1279884==    still reachable: 5,696,699 bytes in 837 blocks
==1279884==         suppressed: 48 bytes in 1 blocks
==1279884==
==1279884== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
--1279884--
--1279884-- used_suppression:      1 dtv-addr-init /usr/lib/x86_64-linux-gnu/valgrind/default.supp:1464 suppressed: 48 bytes in 1 blocks
==1279884==
==1279884== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
I can provide the whole log file if required (it is quite long!). However, I think the current run of valgrind may produce a much more interesting log.
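As a reading aid: the large 'still reachable' figure is usually live allocations (caches, singletons) rather than the leak itself; the records worth chasing are the 'definitely/possibly lost' ones, which can be ranked by size; a sketch (log name from the attachments below, GNU sort assumed):

```
# rank individual leak records by size, largest first
grep -E "[0-9,]+ bytes in [0-9,]+ blocks are (definitely|possibly|indirectly) lost" \
    valgrind-ripcd_1.log | tr -d ',' | sort -k2 -rn | head
```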
Cheers.
So, progress. Thank you for sticking with this, Alejandro! We'll stand by for your next update.
Hi again. Today I got a new valgrind log covering roughly one day of operation, which includes a recording, but I fail to see much difference from the previous one, a week-long log of operations... In the end, I'm not skilled enough to grasp what's going on there, and I'm unsure whether all this valgrind work will ultimately help. Here they are, just in case some useful information is actually in there:
valgrind-ripcd_1.log valgrind-ripcd_2.log
Cheers.
I have observed the issue on two separate systems, both running 4.1 on CentOS 7. StudioCProd: a client in a client/server setup whose primary role is running RDAirPlay for HD2 and recording hourly off-air files. backupauto: a self-contained system that provides backup for Primary and HD2, as well as a silence source at the transmitter. This system is also using the WheatNet GPIO driver to fire salvos through rdcatch.
I will comment below with example output from the logs when this happens.
If you need any other information, please reach out.