robotastic / trunk-recorder

Records calls from a Trunked Radio System (P25 & SmartNet)
GNU General Public License v3.0

Audio in wrong WAV file - Pi 4 #731

Closed aaknitt closed 1 year ago

aaknitt commented 2 years ago

Raspberry Pi 4 is having issues with audio from the end of one call ending up in the start of the audio file of a subsequent call. Seems to get worse the longer trunk-recorder is running. Seems to be related to audio from a specific recorder getting stuck and not getting flushed to the current WAV file and ending up in the next WAV for that same recorder.

Issue does not appear to be present in release 4.3.2 but is in 4.4.0 and 4.4.1.

- Mar 29 d616a86 commit seems ok
- Apr 3 1697f7e commit seems ok
- Apr 5 7036348 commit seems ok
- Apr 7 71564fa commit seems ok
- Apr 7 669ca00 commit has the issue
- Apr 7 b2786c9 commit has the issue
- Apr 9 dd3e6bf commit has the issue

Normal/correct transmission on paging TGID: [screenshot]

Transmission from another TGID on the same frequency ending up at the end of the same WAV file (this is sometimes present in 4.3.2 and is not the issue in question): [screenshot]

Audio from prior transmission of the same recorder at the beginning of the WAV file (this is the issue in question): [screenshot]

robotastic commented 2 years ago

Thanks @aaknitt - If I got things in there correctly, this should be the diff showing what was added in the commit that broke things: https://github.com/robotastic/trunk-recorder/compare/7036348...669ca00

Just to check - is the audio ending up in only one of the WAV files, or is there some duplication where the same audio is in 2 different files?

Are you able to share the logs around when you are having the issue? They may help show what is happening. I can add in some additional debug messages if needed.

With the latest change it should be writing everything that is in the Recorder out to a wav... I wonder if it has something to do with the Pi.

Would you be able to run one of the versions that is not working on a more powerful computer?

aaknitt commented 2 years ago

@robotastic as best I can tell, the last commit without the issue is 71564fa and the first one with the issue is 669ca00.

A log file is attached that shows a recording of TGID 58909. Some of the highlights are pasted below. The TGID is granted a channel at 20:36:00 and the last grant update message is received at 20:36:34.5. 5 seconds later, the call is removed due to inactivity. So the call is active for 39 seconds, which is reflected in the JSON file (also attached). The actual length of audio is about 32 seconds, but only 24.2 seconds make it into the WAV associated with this call. The remaining 7.6 seconds end up in the WAV file for the next call on that TGID, which doesn't occur until 21:15. In this setup I'm only recording that single TGID. If my memory is correct, if I'm set up to record more than one TGID, the "missing" audio would end up in the WAV file for the next call that uses the same recorder (vs. same frequency). I don't think there's duplication of audio going on...it seems like the end of one call gets put in the beginning of the WAV of the next call, and it gets worse and worse over time.

I'll try the Docker image for 4.4.1 on an Ubuntu machine when I get a chance.

[2022-10-31 20:36:00.101324] (debug) tsbk00 Chan Grant Channel ID: 287 Freq: 852.800000 MHz ga 58909 TDMA -1 sa 6800152 Encrypt 0 Bandwidth: 12.5
[2022-10-31 20:36:00.101464] (trace) tsbk00 Chan Grant Channel ID: 287 Freq: 852.800000 MHz ga 58909 TDMA -1 sa 6800152 Encrypt 0 Bandwidth: 12.5
[2022-10-31 20:36:00.101577] (info) [TrunkedSystem] 154C TG: 58909 ( All Pa) Freq: 852.800000 MHz Starting P25 Recorder Num [2] TDMA: false Slot: 0 QPSK: true
[2022-10-31 20:36:00.101775] (trace) [TrunkedSystem] 154C TG:  58909 Freq: 852.800000 MHz Starting wavfile sink SRC ID: 6800152
[2022-10-31 20:36:00.304967] (debug) tsbk00 Chan Grant Channel ID: 287 Freq: 852.800000 MHz ga 58909 TDMA -1 sa 6800152 Encrypt 0 Bandwidth: 12.5
[2022-10-31 20:36:00.306301] (trace) tsbk02 Grant Update Channel ID: 287 Freq: 852.800000 MHz ga 58909 TDMA -1
[2022-10-31 20:36:00.734373] (trace) tsbk02 Grant Update Channel ID: 287 Freq: 852.800000 MHz ga 58909 TDMA -1
[2022-10-31 20:36:00.734990] (trace) tsbk02 Grant Update Channel ID: 287 Freq: 852.800000 MHz ga 58909 TDMA -1
[2022-10-31 20:36:00.766537] (trace) [TrunkedSystem] 154C TG:  58909 Freq: 852.800000 MHz Starting new Transmission Src ID: 6800152
[2022-10-31 20:36:01.087434] (trace) tsbk02 Grant Update Channel ID: 287 Freq: 852.800000 MHz ga 58909 TDMA -1
[2022-10-31 20:36:01.087978] (trace) tsbk02 Grant Update Channel ID: 287 Freq: 852.800000 MHz ga 58909 TDMA -1
[2022-10-31 20:36:01.487394] (trace) tsbk02 Grant Update Channel ID: 287 Freq: 852.800000 MHz ga 58909 TDMA -1
[2022-10-31 20:36:01.945488] (trace) tsbk02 Grant Update Channel ID: 287 Freq: 852.800000 MHz ga 58909 TDMA -1
...
[2022-10-31 20:36:34.489489] (trace) tsbk02 Grant Update Channel ID: 287 Freq: 852.800000 MHz ga 58909 TDMA -1
...
[2022-10-31 20:36:40.008910] (trace) [TrunkedSystem] 154C TG: 58909 ( All Pa) Freq: 852.800000 MHz  Removing call that has been inactive for more than 5 Sec  Rec last write: 1 State: 1
[2022-10-31 20:36:40.009423] (trace) last_update: 1667266594 diff: 6
[2022-10-31 20:36:40.009397] (trace) [TrunkedSystem] 154C TG: 58909 ( All Pa) Freq: 852.800000 MHz Concluding Recorded Call - Last Update: 6s Call Elapsed: 40
[2022-10-31 20:36:40.010248] (trace) stop_recording() - stopping wavfile sink but recorder state is: 1
[2022-10-31 20:36:40.010429] (info) [TrunkedSystem] 154C TG:  58909 Freq: 852.800000 MHz - Transmission src: 6800152 pos: 0 length: 24.192
[2022-10-31 20:36:40.011073] (trace) last_update: 1667266599 diff: 1
[2022-10-31 20:36:40.091923] (trace) Converting: recordings/TrunkedSystem/2022/10/31/58909-1667266560_852800000-call_154.m4a
[2022-10-31 20:36:40.092081] (trace) Command: sox recordings/TrunkedSystem/2022/10/31/58909-1667266560_852800000-call_154.wav --norm -t wav - | fdkaac --silent -p 2 --ignorelength -b 8000 -o recordings/TrunkedSystem/2022/10/31/58909-1667266560_852800000-call_154.m4a -

10-31-2022_2022_00.log

58909-1667266560_852800000-call_154.zip

robotastic commented 2 years ago

Thanks for sharing - I will go look through the log files. In the snippet you shared, it looks like it is not getting the End Of Transmission message on the voice channel.... or more likely, the voice channel is really far behind the control channel. So when the Control Channel stops sending Updates for that call, the voice channel is only half way through processing the buffer for that call. The Pi 4 should be fast enough to handle one channel, easily. I wonder if you have a very slow memory card if that could be holding things back?

One thing to try is using a RAM FS instead of the SD for the Capture dir and see if that changes things.

aaknitt commented 2 years ago

> Thanks for sharing - I will go look through the log files. In the snippet you shared, it looks like it is not getting the End Of Transmission message on the voice channel.... or more likely, the voice channel is really far behind the control channel. So when the Control Channel stops sending Updates for that call, the voice channel is only half way through processing the buffer for that call. The Pi 4 should be fast enough to handle one channel, easily. I wonder if you have a very slow memory card if that could be holding things back?
>
> One thing to try is using a RAM FS instead of the SD for the Capture dir and see if that changes things.

I was thinking along the same lines, but prior versions work well on the same Pi with the same SD card, etc.

aaknitt commented 2 years ago

@robotastic I may be totally barking up the wrong tree here since I'm not well-versed in understanding the diffs and commit history, but it looks like perhaps the change from nonstop_wavfile_sink_impl.cc to transmission_sink.cc could be related?

In transmission_sink.cc it only calls wav_write_sample() when state == RECORDING but that wasn't the case in nonstop_wavfile_sink_impl.cc. nonstop_wavfile_sink_impl.cc may even hold it in the RECORDING state until everything gets written? I'm not very confident I'm interpreting any of this correctly, but it seems like this may be a logical place for things to go wrong.
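To make the suspicion above concrete, here is a toy model (plain C++, not trunk-recorder's actual classes; all names are illustrative) of a sink that only consumes samples while in a RECORDING state. Samples delivered after a premature state change sit in the queue and drain into the next file opened on the same recorder, which matches the observed symptom:

```cpp
#include <vector>

// Toy model of the suspected bug: a sink that only consumes samples while
// RECORDING strands late-arriving samples in its input queue, and they end
// up at the start of the *next* file written by that recorder.
enum class State { IDLE, RECORDING, STOPPED };

struct ToySink {
  State state = State::IDLE;
  std::vector<int> queue;        // samples delivered but not yet consumed
  std::vector<int> current_file; // samples written to the open WAV

  void deliver(int sample) { queue.push_back(sample); }

  // Mimics the work()/write loop: the gate under suspicion.
  void work() {
    if (state != State::RECORDING)
      return; // samples stay queued instead of being flushed
    for (int s : queue)
      current_file.push_back(s);
    queue.clear();
  }

  // Mimics stop_recording(): hand back whatever made it into the WAV.
  std::vector<int> close_file() {
    state = State::STOPPED;
    std::vector<int> out;
    out.swap(current_file);
    return out;
  }
};
```

With this model, stopping the state early leaves sample 3 queued; the first file comes back short, and sample 3 appears at the start of the next recording on the same sink.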

tadscottsmith commented 2 years ago

If you're comfortable modifying the code, changing this line from 0 to 10, rebuilding, and capturing the additional logging while only recording that one TG, would probably be super helpful.

https://github.com/robotastic/trunk-recorder/blob/fe2eecbe1580db64cfe95be7bd7f2c21dd69045c/trunk-recorder/recorders/p25_recorder_decode.cc#L86

aaknitt commented 2 years ago

> If you're comfortable modifying the code, changing this line from 0 to 10, rebuilding, and capturing the additional logging while only recording that one TG, would probably be super helpful.
>
> https://github.com/robotastic/trunk-recorder/blob/fe2eecbe1580db64cfe95be7bd7f2c21dd69045c/trunk-recorder/recorders/p25_recorder_decode.cc#L86

Will do

robotastic commented 2 years ago

I created a debug branch and that turns on some additional debug statements. I have been getting some reports on my DC system that some calls are getting cut off.... and the audio is just getting lost. Give this branch a try: https://github.com/robotastic/trunk-recorder/tree/debug-call-mismatch

aaknitt commented 2 years ago

Here's a log from the debug-call-mismatch branch. Search for 58909, which is the only TGID being recorded.
11-01-2022_2238_00.zip

tadscottsmith commented 2 years ago

If you're able to rebuild it with the verbosity cranked up, I have a hunch.

aaknitt commented 2 years ago

> If you're able to rebuild it with the verbosity cranked up, I have a hunch.

verbosity was set to 10 when I built.

tadscottsmith commented 2 years ago

Can you double check? I know he set it back to 0 in his last commit on the debug branch. With it at 10, you should see much more detailed logs similar to this...

11/02/22 19:10:01.109880 [0] IMBE 9a 3f 22 ef 7e 2f ff fc a3 10 3b errs 0
11/02/22 19:10:01.111191 [0] IMBE 9a bf 63 83 b2 03 ff f0 8e cc 0a errs 0
11/02/22 19:10:01.112511 [0] IMBE 8e c1 f4 8d bf 93 00 09 1b 65 bf errs 0
11/02/22 19:10:01.113527 [0] IMBE 9a bf b7 e6 36 32 87 f4 7a 50 56 errs 0
11/02/22 19:10:01.114212 [0] IMBE 9a 66 ea 42 e6 26 ff f0 b5 96 7f errs 2
11/02/22 19:10:01.115181 [0] IMBE 99 fe 44 72 39 41 f8 1e 8c e9 9c errs 0
11/02/22 19:10:01.115993 [0] IMBE 96 a3 dc 7b c5 d4 ff 8a 09 43 4f errs 0
11/02/22 19:10:01.116801 [0] IMBE 96 cf 94 79 ad d4 ff b8 37 09 aa errs 0
11/02/22 19:10:01.138303 [0] NAC 0x40d TDU15:  LCW: ec=0, pb=0, sf=1, lco=34 : 62 00 34 0f 01 02 01 7d 70, gly_errs=0
11/02/22 19:10:01.138893 [0] NAC 0x40d TDU15:  LCW: ec=0, pb=0, sf=1, lco=2 : 42 00 2d 01 f8 01 09 0e a6, ch_A=45, grp_A=504, ch_B=265, grp_B=3750, gly_errs=0
11/02/22 19:10:01.139487 [0] NAC 0x40d TDU15:  LCW: ec=0, pb=0, sf=0, lco=0 : 00 00 04 00 09 01 00 00 00, srcaddr=0, grpaddr=2305, gly_errs=0
11/02/22 19:10:01.190265 [0] NAC 0x40d TDU15:  LCW: ec=0, pb=0, sf=0, lco=0 : 00 00 04 00 69 ef 00 00 00, srcaddr=0, grpaddr=27119, gly_errs=0
11/02/22 19:10:01.190432 [0] NAC 0x40d TDU15:  LCW: ec=0, pb=0, sf=0, lco=0 : 00 00 04 00 69 ef 00 00 00, srcaddr=0, grpaddr=27119, gly_errs=0
11/02/22 19:10:01.214195 [0] NAC 0x40d HDU:  ESS: tgid=11558, mfid=0, algid=80, keyid=0, mi=00 00 00 00 00 00 00 00 00, gly_errs=0, rs_errs=0
11/02/22 19:10:01.219082 [0] NAC 0x40d LDU2: ESS: algid=80, keyid=0, mi=00 00 00 00 00 00 00 00 00, rs_errs=3
11/02/22 19:10:01.219181 [0] IMBE 92 d7 54 53 0e 69 ff c9 e2 c9 af errs 0
11/02/22 19:10:01.219925 [0] IMBE 92 d7 74 79 ca 68 ff f9 71 60 d2 errs 0
11/02/22 19:10:01.220608 [0] IMBE 8e 6f 6a 73 43 48 ff ee 52 77 4f errs 0
11/02/22 19:10:01.221346 [0] IMBE 8e 5e 91 1e bd c7 ff c1 e9 50 08 errs 0
11/02/22 19:10:01.221993 [0] IMBE 89 da 58 2e 7e f6 ff cd 0f 79 27 errs 0
11/02/22 19:10:01.222598 [0] IMBE 89 db ed 8a 7c 14 f9 c5 6d 90 3e errs 0
11/02/22 19:10:01.223178 [0] IMBE 89 db cc 81 2e d0 f8 02 51 95 c7 errs 0
11/02/22 19:10:01.223659 [0] IMBE 99 a5 f3 c6 a6 e2 98 0a c2 cc e4 errs 2
11/02/22 19:10:01.224155 [0] IMBE a5 28 f9 94 63 a9 80 0a f9 4c 0b errs 0
11/02/22 19:10:01.247667 [0] NAC 0x40d TDU3:
11/02/22 19:10:01.248300 [0] NAC 0x40d TDU3:
11/02/22 19:10:01.300848 [0] NAC 0x40d TDU15:  LCW: ec=0, pb=0, sf=0, lco=0 : 00 00 04 00 69 ef 00 00 00, srcaddr=0, grpaddr=27119, gly_errs=0
11/02/22 19:10:01.300994 [0] NAC 0x40d TDU15:  LCW: ec=0, pb=0, sf=1, lco=34 : 62 00 34 0f 01 01 05 c5 70, gly_errs=0
11/02/22 19:10:01.301051 [0] NAC 0x40d TDU3:
11/02/22 19:10:01.301084 [0] NAC 0x40d TDU3:
11/02/22 19:10:01.301115 [0] NAC 0x40d TDU3:
11/02/22 19:10:01.321408 [0] NAC 0x40d LDU1: LCW: ec=0, pb=0, sf=1, lco=34 : 62 00 34 0f 01 01 05 c5 70
aaknitt commented 2 years ago

I'm not sure what I might be doing wrong - any ideas? Anything else I need to change or have set a certain way in the config.json file? I changed verbosity to 10 prior to building, but I'm not seeing anything like that in the logs.
[screenshot]

tadscottsmith commented 2 years ago

That looks right. The only thing I can think of is that WinSCP (or whatever tool you're using) isn't working with Git. Can you SSH in directly and verify?


cat /home/pi/trunk-recorder/trunk-recorder/recorders/p25_recorder_decode.cc | grep 'int verbosity'
aaknitt commented 2 years ago

> That looks right. The only thing I can think of is that WinSCP (or whatever tool you're using) isn't working with Git. Can you SSH in directly and verify?
>
> cat /home/pi/trunk-recorder/trunk-recorder/recorders/p25_recorder_decode.cc | grep 'int verbosity'

[screenshot]

tadscottsmith commented 2 years ago

@aaknitt That's weird! That should be displaying a ton of debug messages.

@robotastic Just thought I'd jot this theory down while it's in my head. I don't have a good way to know if this is really the issue or not...

  1. The system isn't sending a lot of TDU/TDULC messages after the end of the transmission.
  2. Too few TDUs are causing this section of code not to be called, since there's never a TDU while amt_produce == 0. https://github.com/robotastic/trunk-recorder/blob/fe2eecbe1580db64cfe95be7bd7f2c21dd69045c/lib/op25_repeater/lib/p25_frame_assembler_impl.cc#L226
  3. The transmission_sink never gets a tag to end_transmission().
  4. Back in main.cc, manage_calls() is setting the call state to COMPLETED and calls conclude_call().
  5. Conclude_call() attempts to stop() the recorder.
  6. Things make their way down to transmission_sink::stop_recording(), but by this time the samples have all been written, so this line never calls end_transmission() and the wav file isn't closed. d_fp also isn't reset, so some of the error checking when opening the next wav file isn't successful. https://github.com/robotastic/trunk-recorder/blob/fe2eecbe1580db64cfe95be7bd7f2c21dd69045c/trunk-recorder/gr_blocks/transmission_sink.cc#L266
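
If step 6 is right, one hedged hardening would be to make stop_recording() unconditionally finalize the file and reset the handle. This is only a sketch under that assumption - the names d_fp and end_transmission() come from the thread, but this is not the real transmission_sink code:

```cpp
#include <cstdio>

// Sketch: stop_recording() always ends the transmission and nulls the file
// handle, so the next open's error checks start from a clean slate even when
// no terminate tag was ever seen. Names are borrowed from the discussion;
// the real class is more involved.
struct SinkFiles {
  std::FILE *d_fp = nullptr; // open WAV handle (assumed name)
  bool ended = false;

  void end_transmission() { ended = true; }

  void stop_recording() {
    end_transmission(); // finalize unconditionally, not only on a tag
    if (d_fp) {
      std::fclose(d_fp);
      d_fp = nullptr; // always reset so stale handles can't leak into the next call
    }
  }
};
```

The point of the sketch is just that closing and resetting should not depend on whether samples or tags were pending when the call timed out.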
aaknitt commented 2 years ago

@tadscottsmith I'm not sure I totally follow all of that, but would that theory align with the first WAV file after startup being too short, with some of its audio making its way into the start of the next file? It seems like samples from the transmission aren't making their way into the WAV file before it gets closed out.

tadscottsmith commented 2 years ago

My theory is that things are writing to the WAV file correctly, but the WAV files aren't always being closed correctly, so that multiple transmissions from the same recorder might be getting dumped into the same WAV file.

robotastic commented 2 years ago

Interesting! I could see something like that happening on a simulcast channel esp. Maybe the TDUs do not get inserted correctly. Let me go walk through that while I check out the Debug Logs.


robotastic commented 2 years ago

Checked out the logs - it looks like there was only one TDU detected in the voice channel. The rest of the calls ended because they had timed out - which means that they were still recording but an update message hadn't been heard in the past 5 seconds. Let me go look at that code path and see if there is something weird there. It might be a less-used code path.

tadscottsmith commented 2 years ago

I'd add a check similar to this. I know there are instances on my system where transmissions aren't ended correctly because there's never a TDU string that doesn't also contain voice frames. https://github.com/tadscottsmith/trunk-recorder/blob/29943bb6562afa9175e0a431286724e659128811/lib/op25_repeater/lib/p25_frame_assembler_impl.cc#L216-L218
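
The suggested check boils down to tagging a terminate whenever a TDU is seen, not only when the output batch carries no voice samples (amt_produce == 0, per the earlier comment). A simplified model of the two conditions - hypothetical names, not the actual op25 code:

```cpp
// Toy comparison of the terminate-tag conditions discussed in this thread.
// "old": only tag when the batch is a pure TDU with no voice output.
// "new": tag whenever a TDU is present, even alongside voice frames.
struct Batch {
  int voice_samples; // stands in for amt_produce
  bool has_tdu;
};

bool should_tag_terminate_old(const Batch &b) {
  return b.has_tdu && b.voice_samples == 0;
}

bool should_tag_terminate_new(const Batch &b) {
  return b.has_tdu;
}
```

Under the old condition, a system whose TDUs always ride along with voice frames never emits the tag, which is exactly the failure mode described above.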

aaknitt commented 2 years ago

I added in a second talkgroup to troubleshoot further. Some logs and recordings are attached.

It seems like things work pretty well for short transmissions. It's the long transmissions (which is what is always occurring on the paging talkgroup) that seem to cause issues.

However, even with the short transmissions there's some funny business going on.

For example, Recorder 2 is used for the first transmission on TGID 58910 after startup. That transmission eventually times out. While it's in the process of timing out, Recorder 3 is spooled up for additional traffic in TGID 58910.

The next time that Recorder 2 is used, a TDU is detected immediately after the recorder is started. I think it's likely that this is actually the TDU from the end of the previous transmission that got stuck somehow?

The part that puzzles me is why Recorder 2 stops getting or processing data prematurely for the original call. It eventually times out, but its TDU seems to show up the next time that recorder is used.

I haven't really pieced all of this together (long recordings and misplaced TDUs) into a coherent theory as to what's actually going on yet, but I'm wondering if they're related: both seem to be examples of a recorder ending its processing of data prematurely, with that data ending up being processed the next time the recorder is used.

[screenshot]

logs_recordings.zip

robotastic commented 2 years ago

TLDR; I made a small change and added some more logs. Prob won't fix things but may give some more clues.

Just paging through logs and taking note of interesting things. From that initial instance, it does look like Rec #2 just stopped for some reason, like there should be audio and it is not recording it. I do have some Transmissions start with a TDU though: when Rec 3 starts, it has a TDU initially, so things may not be getting stuck in the buffer; it may just be part of the P25 process:

[2022-11-04 18:24:49.461960] (info)   [TrunkedSystem]   41C  TG:      58910 Freq: 852.350000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 0

One weird thing is that the 1st call should probably have Concluded after this.... record_more_transmissions is set to false. I am going to go check the code:

[2022-11-04 18:24:50.180473] (info)   [TrunkedSystem]   37C  TG:      58910 Freq: 852.612500 MHz    Termination Voice Channel Flag - rec sample count 36000 pos: 36000 offset: 61927
[2022-11-04 18:24:50.180787] (trace)   [TrunkedSystem]  37C  TG:      58910 Freq: 852.612500 MHz    Spike Count: 3 pos: 36000 offset: 61927
[2022-11-04 18:24:50.181054] (trace)   [TrunkedSystem]  37C  TG:      58910 Freq: 852.612500 MHz    Error Count: 45 pos: 36000 offset: 61927
[2022-11-04 18:24:50.181257] (info)   [TrunkedSystem]   37C  TG:      58910 Freq: 852.612500 MHz    TERM - record_more_transmissions = false, setting Recorder More: false - count: 36000

OK - Checked the code... right now, I don't have a block of code checking for when a Call is Inactive (hasn't received a Call Update in > 1) and the Recorder has received a TDU. It looks like I did have code to do that... no idea when I commented it out though!

https://github.com/robotastic/trunk-recorder/blob/fe2eecbe1580db64cfe95be7bd7f2c21dd69045c/trunk-recorder/main.cc#L884-L903
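
For reference, the commented-out path being described can be sketched as a small decision function: conclude early when control-channel updates have gone quiet and the recorder has already seen its TDU, otherwise fall back to the hard inactivity timeout. Thresholds and names here are illustrative, not the real main.cc logic:

```cpp
// Toy version of the manage_calls() decision discussed above.
// 5s is the hard inactivity timeout mentioned in the logs; 1s stands in for
// the "hasn't received a Call Update in > 1" check. Both are assumptions.
enum class CallState { RECORDING, INACTIVE, COMPLETED };

CallState manage_call(int secs_since_update, bool recorder_saw_tdu) {
  if (secs_since_update > 5)
    return CallState::COMPLETED; // hard timeout: force-stop the recorder
  if (secs_since_update > 1 && recorder_saw_tdu)
    return CallState::COMPLETED; // early conclude: updates quiet + TDU seen
  if (secs_since_update > 1)
    return CallState::INACTIVE;  // quiet, but recorder still mid-transmission
  return CallState::RECORDING;
}
```

With the early-conclude branch missing (commented out), a call whose TDU already arrived still lingers until the 5-second timeout, which matches the Pi logs later in this thread.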

It looks like the Control Channel is already ahead of the Voice Channel here. The Chan Grant comes in before the TDU on the voice channel. Interestingly, this causes the Source ID to be missed initially, since it is in the Grant.

[2022-11-04 18:24:38.783009] (debug)   tsbk00    Chan Grant  Channel ID:   257   Freq: 852.612500 MHz    ga   58910  TDMA -1 sa 6822658  Encrypt 0   Bandwidth: 12.5
[2022-11-04 18:24:38.783227] (trace)   nac 691 type 7 size 12 mesg len: 12
[2022-11-04 18:24:38.783373] (trace)   TSBK: opcode: $2
[2022-11-04 18:24:38.783552] (trace)   tsbk02   Grant Update    Channel ID:   245   Freq: 852.537500 MHz    ga   58524  TDMA -1
[2022-11-04 18:24:38.783852] (trace)   tsbk02   Grant Update    Channel ID:   245   Freq: 852.537500 MHz    ga   58524  TDMA -1tsbk02   Grant Update    Channel ID:   335   Freq: 853.100000 MHz    ga   58265  TDMA -1
[2022-11-04 18:24:38.784222] (trace)   nac 691 type 7 size 12 mesg len: 12
[2022-11-04 18:24:38.784360] (trace)   TSBK: opcode: $9
[2022-11-04 18:24:38.784475] (debug)   tsbk09: Telephone Interconnect Voice Channel Grant Update
[2022-11-04 18:24:38.785806] (trace)   nac 691 type 7 size 12 mesg len: 12
[2022-11-04 18:24:38.786395] (trace)   TSBK: opcode: $0
[2022-11-04 18:24:38.787532] (debug)   tsbk00   Chan Grant  Channel ID:   257   Freq: 852.612500 MHz    ga   58910  TDMA -1 sa 6822658  Encrypt 0   Bandwidth: 12.5
[2022-11-04 18:24:38.789733] (trace)   nac 691 type 7 size 12 mesg len: 12
[2022-11-04 18:24:38.790881] (trace)   TSBK: opcode: $2
[2022-11-04 18:24:38.791944] (trace)   tsbk02   Grant Update    Channel ID:   287   Freq: 852.800000 MHz    ga    1073  TDMA -1
[2022-11-04 18:24:38.793515] (trace)   tsbk02   Grant Update    Channel ID:   287   Freq: 852.800000 MHz    ga    1073  TDMA -1tsbk02   Grant Update    Channel ID:   245   Freq: 852.537500 MHz    ga   58524  TDMA -1
[2022-11-04 18:24:38.797463] (trace)   nac 691 type 7 size 12 mesg len: 12
[2022-11-04 18:24:38.797665] (trace)   TSBK: opcode: $2
[2022-11-04 18:24:38.798325] (trace)   tsbk02   Grant Update    Channel ID:   335   Freq: 853.100000 MHz    ga   58265  TDMA -1
[2022-11-04 18:24:38.798611] (info)   [TrunkedSystem]   37C  TG:      58910 Freq: 852.612500 MHz    Termination Voice Channel Flag - rec sample count 17280 pos: 17280 offset: 17280
[2022-11-04 18:24:38.798824] (trace)   [TrunkedSystem]  37C  TG:      58910 Freq: 852.612500 MHz    Spike Count: 6 pos: 17280 offset: 17280
[2022-11-04 18:24:38.799049] (trace)   [TrunkedSystem]  37C  TG:      58910 Freq: 852.612500 MHz    Error Count: 90 pos: 17280 offset: 17280
[2022-11-04 18:24:38.799267] (info)   [TrunkedSystem]   37C  TG:      58910 Freq: 852.612500 MHz    TERM - record_more_transmissions = false, setting Recorder More: true - count: 17280
[2022-11-04 18:24:38.982979] (info)   [TrunkedSystem]   37C  TG:      58910 Freq: 852.612500 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 17281
aaknitt commented 2 years ago

For what it's worth, I finally got around to building and running on my Ubuntu machine today, and got the same results.

Caveat: while testing I was running two instances - one for my feed (running a much, much older build) and, at the same time, a build of the debug-call-mismatch branch.

So processor load was higher than it would be normally, but the old build continued to perform normally while the debug-call-mismatch build had the same results that I'm seeing on the Pi (if anything, it's worse). Really puzzling why no one else seems to be seeing the same thing.

aaknitt commented 2 years ago

Here are logs from the latest build. This is just recording the single paging TGID. Let me know if it's more useful to collect data with just that single TGID or with another (non-paging) one also moving forward.

Edit: I just saw you added another commit around mid-day. The recordings attached are from the morning commit, I'll rebuild and try again.

6NovSingleTGID.zip

robotastic commented 2 years ago

Yea - it is all sort of weird. On that one paging channel, one clear issue is that there are no TDUs being received... but that shouldn't matter too much; it should just revert back to the old way of doing things and time out after X number of seconds if it hasn't seen an Update message. It is good that it is also happening on the Ubuntu box. It at least means that it is a general problem and not some weird Pi corner case... which might have explained why other people have not reported it.

The only other thing I can think of to try is turning on the debug for the control channel.

Change this line to 10: https://github.com/robotastic/trunk-recorder/blob/fe2eecbe1580db64cfe95be7bd7f2c21dd69045c/trunk-recorder/recorders/p25_recorder_decode.cc#L86

And set this line back to 0: https://github.com/robotastic/trunk-recorder/blob/fe2eecbe1580db64cfe95be7bd7f2c21dd69045c/trunk-recorder/systems/p25_trunking.cc#L245

This should give us everything that is happening on the Voice Channel and maybe give us a sense of why it is doing that. Maybe...

I am going to try some experiments on my side too.

aaknitt commented 2 years ago

OK, it's getting a bit weirder. With the latest build, Ubuntu now seems to be behaving while the Pi is still having issues. Starting to make me doubt my sanity. Log files for each are attached with some overlapping time periods. The call at 9:42 p.m. on TG 58909 (the paging talkgroup) is captured by both. The Ubuntu instance is recording 58909 (paging) and 58910 (dispatch), while the Pi is only capturing 58909.

Ubuntu.zip Pi.zip

robotastic commented 1 year ago

Hi @aaknitt I wanted to check back and see how things were looking. Is there any chance that the signal could be not tuned correctly? Trunk Recorder should still handle it better, but that might explain the changing behavior.

aaknitt commented 1 year ago

@robotastic sorry I've been out of town a bunch and haven't been doing any more digging on this lately. I should be able to get back into it if needed, but I've kind of lost track of what the current theory is and what additional data/logs are needed to help track it down.

The fact that I can turn the problem on and off by using different build versions makes me think that it's not a hardware/tuning issue, but maybe I'm misunderstanding what you're getting at.

Let me know what you need from me to keep the investigation moving.

robotastic commented 1 year ago

No problem - I sort of lost the bubble on it too. If you are able to give it one more try with the debug-call-mismatch branch, that would be great. It has some additional messages turned on and I disabled the verbose logging from the Control Channel.

Good point about different versions handling it differently. It probably is not a tuning thing... unless the newer version has some problems where if a message gets dropped it starts misbehaving.

(For future me - this looks like the correct link for comparing changes between the 2 versions: https://github.com/robotastic/trunk-recorder/compare/669ca00...71564fa )

One thing I noticed from the last batch you uploaded: the big difference between the Pi version and the Ubuntu one is that the Pi is not getting any Termination flags on the Voice Channel. The Recorder on the Pi is abruptly stopped. I wonder if there is some decoding problem? Or maybe it is falling behind a ton?

Pi:

[2022-11-06 21:42:29.016069] (info)   [TrunkedSystem]   793C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz     Removing call that has been inactive for more than 5 Sec  Rec last write: 1 State: 1
[2022-11-06 21:42:29.016196] (info)   [TrunkedSystem]   793C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz    Concluding Recorded Call_impl - Last Update: 6s    Call_impl Elapsed: 30
[2022-11-06 21:42:29.016390] (info)   [TrunkedSystem]   793C - Stopping P25 Recorder Num [2] TG:      58909 (                 All Pa)   Freq: 852.800000 MHz    TDMA: false Slot: 0 Hz Error: 494
[2022-11-06 21:42:29.016796] (trace)   stop_recording() - stopping wavfile sink but recorder state is: 1
[2022-11-06 21:42:29.016901] (info)   [TrunkedSystem]   793C TG:      58909 Freq: 852.800000 MHz    - Transmission src: 6800152 pos: 0 length: 17.388
[2022-11-06 21:42:29.034563] (trace)   Converting: recordings/TrunkedSystem/2022/11/6/58909-1667792519_852800000-call_793.m4a
[2022-11-06 21:42:29.034778] (trace)   Command: sox recordings/TrunkedSystem/2022/11/6/58909-1667792519_852800000-call_793.wav --norm=-.01 -t wav - | fdkaac --silent  -p 2 --moov-before-mdat --ignorelength -b 8000 -o recordings/TrunkedSystem/2022/11/6/58909-1667792519_852800000-call_793.m4a -

Ubuntu:

[2022-11-06 21:42:24.011059] (trace)   wav - setting record_more to: false, State: idle sample count: 0
[2022-11-06 21:42:24.011146] (info)   [TrunkedSystem]   199C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz     Setting Call state to INACTIVE last update: 2
[2022-11-06 21:42:24.011206] (info)   Recorder state: idle
[2022-11-06 21:42:24.011225] (info)   [TrunkedSystem]   199C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz    Completing Call, its state is INACTIVE and its recorder state is STOPPED or IDLE
[2022-11-06 21:42:24.011252] (info)   [TrunkedSystem]   199C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz    Concluding Recorded Call_impl - Last Update: 2s    Call_impl Elapsed: 25
[2022-11-06 21:42:24.011294] (info)   [TrunkedSystem]   199C - Stopping P25 Recorder Num [0] TG:      58909 (                 All Pa)   Freq: 852.800000 MHz    TDMA: false Slot: 0 Hz Error: -1496
[2022-11-06 21:42:24.011333] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    - Transmission src: 6800152 pos: 0 length: 22.32
[2022-11-06 21:42:24.015309] (trace)   Converting: /home/andy/trunk-build-debug/recordings/TrunkedSystem/2022/11/6/58909-1667792519_852800000-call_199.m4a
[2022-11-06 21:42:24.015362] (trace)   Command: sox /home/andy/trunk-build-debug/recordings/TrunkedSystem/2022/11/6/58909-1667792519_852800000-call_199.wav --norm=-.01 -t wav - | fdkaac --silent  -p 2 --moov-before-mdat --ignorelength -b 8000 -o /home/andy/trunk-build-debug/recordings/TrunkedSystem/2022/11/6/58909-1667792519_852800000-call_199.m4a -

In the Pi Logs, there is only 1 Voice Termination Message:

[2022-11-06 21:01:25.588224] (info)   [TrunkedSystem]   462C TG:      58909 Freq: 852.537500 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 0

But on the Ubuntu Logs, there are > 1300!!

I have no idea what could be causing this....

robotastic commented 1 year ago

I also meant to add - in the Ubuntu version, the recording stops when the Call has stopped receiving Call Updates on the Trunk Channel, because the Recorder has already received a Termination Flag on the Voice Channel.

With the Pi, the recording doesn't stop when the Call has stopped receiving Call Updates on the Trunk Channel and instead has to be forcefully stopped because the Recorder is still going 5 seconds past when this timeout happens.

In the Pi logs, this is where it should have stopped the recording:

[2022-11-06 21:42:24.980612] (trace)   setting silence_frame_count 0 to d_silence_frames: 0
[2022-11-06 21:42:25.015068] (trace)   wav - setting record_more to: false, State: recording sample count: 119232
[2022-11-06 21:42:25.015385] (info)   [TrunkedSystem]   793C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz     Setting Call state to INACTIVE last update: 2
robotastic commented 1 year ago

Wait... I think this has to be the correct way to see what has changed: https://github.com/robotastic/trunk-recorder/compare/71564fa...669ca00

tadscottsmith commented 1 year ago

@robotastic can you add these lines to your debug-call-mismatch branch? It should help clarify whether the TDU is just not being received, or if the "terminate" tag wasn't added due to other issues.

https://github.com/tadscottsmith/trunk-recorder/blob/29943bb6562afa9175e0a431286724e659128811/lib/op25_repeater/lib/p25_frame_assembler_impl.cc#L216-L218

https://github.com/tadscottsmith/trunk-recorder/blob/29943bb6562afa9175e0a431286724e659128811/lib/op25_repeater/lib/p25_frame_assembler_impl.cc#L235

robotastic commented 1 year ago

@tadscottsmith good call - I had to change the messages around a little because the Call object is not available in the assembler. I also turned on verbose logging on the Voice Channel.

robotastic commented 1 year ago

I wonder if this change could have anything to do with it. It is in transmission_sink:

image
tadscottsmith commented 1 year ago

That is a good find too! I added some code on my Pi instance, and in the last 5+ hours I haven't seen any instances where the tags differ between the two lookups, but I could be doing something wrong. Hopefully @aaknitt can build the latest debug-call-mismatch and get some additional logging.

  std::vector<gr::tag_t> old_tags;
  // pmt::pmt_t squelch_key(pmt::intern("squelch_eob"));
  get_tags_in_range(old_tags, 0, nitems_read(0), nitems_read(0) + noutput_items);
  get_tags_in_window(tags, 0, 0, noutput_items);

  bool old_terminate = false;
  bool new_terminate = false;
  for (unsigned int i = 0; i < old_tags.size(); i++) {
    if (pmt::eq(terminate_key, old_tags[i].key)) {
      old_terminate = true; // no `bool` here: redeclaring would shadow the outer flag
    }
  }
  for (unsigned int i = 0; i < tags.size(); i++) {
    if (pmt::eq(terminate_key, tags[i].key)) {
      new_terminate = true;
    }
  }

  if (old_terminate != new_terminate) {
    char formattedTalkgroup[62];
    snprintf(formattedTalkgroup, 61, "%c[%dm%10ld%c[0m", 0x1B, 35, d_current_call_talkgroup, 0x1B);
    std::string talkgroup_display = boost::lexical_cast<std::string>(formattedTalkgroup);
    BOOST_LOG_TRIVIAL(error) << "[" << d_current_call_short_name << "]\t\033[0;34m" << d_current_call_num << "C\033[0m\tTG: " << talkgroup_display << "\tTAG MISMATCH - Old Terminate: " << old_terminate << " New Terminate: " << new_terminate;
  }
robotastic commented 1 year ago

oh wait! It looks like all of the Termination log messages on the Ubuntu logs were from the other TG 58910:

[2022-11-06 21:15:58.515647] (info)   [TrunkedSystem]   12C  TG:      58910 Freq: 853.100000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 115282

The problem could just be with P25 Voice Channels that do not have TDUs. I still don't understand why it would work in one version and not the other though....

tadscottsmith commented 1 year ago

It looked like the 21:41:59 call had TDUs on Ubuntu:

[2022-11-06 21:15:02.757949] (info)   [TrunkedSystem]   Decoding System_impl ID 692 WACN: BEE00 NAC: 691
[2022-11-06 21:41:59.070157] (info)   [TrunkedSystem]   199C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz    Starting P25 Recorder Num [0]  TDMA: false Slot: 0 QPSK: true
[2022-11-06 21:41:59.128725] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 158550
[2022-11-06 21:41:59.237438] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 158551
[2022-11-06 21:41:59.619231] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Starting new Transmission   Src ID:  6800152
[2022-11-06 21:42:22.229337] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 178560 pos: 178560 offset: 337112
[2022-11-06 21:42:22.229500] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    TERM - record_more_transmissions = true - count: 178560
[2022-11-06 21:42:22.229567] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    TERM - Recorder State is: IDLE
[2022-11-06 21:42:22.338149] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337113
[2022-11-06 21:42:22.447429] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337114
[2022-11-06 21:42:22.557101] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337115
[2022-11-06 21:42:22.666173] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337116
[2022-11-06 21:42:22.775283] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337117
[2022-11-06 21:42:22.884803] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337118
[2022-11-06 21:42:22.994968] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337119
[2022-11-06 21:42:23.103037] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337120
[2022-11-06 21:42:23.212872] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337121
[2022-11-06 21:42:23.321745] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337122
[2022-11-06 21:42:23.430553] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337123
[2022-11-06 21:42:23.540802] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337124
[2022-11-06 21:42:23.649349] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337125
[2022-11-06 21:42:23.757909] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337126
[2022-11-06 21:42:23.867570] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337127
[2022-11-06 21:42:23.977390] (info)   [TrunkedSystem]   199C TG:      58909 Freq: 852.800000 MHz    Termination Voice Channel Flag - rec sample count 0 pos: 0 offset: 337128
[2022-11-06 21:42:24.011146] (info)   [TrunkedSystem]   199C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz     Setting Call state to INACTIVE last update: 2
[2022-11-06 21:42:24.011225] (info)   [TrunkedSystem]   199C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz    Completing Call, its state is INACTIVE and its recorder state is STOPPED or IDLE
[2022-11-06 21:42:24.011252] (info)   [TrunkedSystem]   199C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz    Concluding Recorded Call_impl - Last Update: 2s    Call_impl Elapsed: 25

But on the Pi, the same call didn't log them:

[2022-11-06 20:07:21.124158] (info)   [TrunkedSystem]   Decoding System_impl ID 692 WACN: BEE00 NAC: 691
[2022-11-06 21:41:59.048728] (info)   [TrunkedSystem]   793C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz    Starting P25 Recorder Num [2]  TDMA: false Slot: 0 QPSK: true
[2022-11-06 21:41:59.170756] (info)   [TrunkedSystem]   793C TG:      58909 Freq: 852.800000 MHz    Starting new Transmission   Src ID:  6800152
[2022-11-06 21:42:25.015385] (info)   [TrunkedSystem]   793C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz     Setting Call state to INACTIVE last update: 2
[2022-11-06 21:42:29.016069] (info)   [TrunkedSystem]   793C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz     Removing call that has been inactive for more than 5 Sec  Rec last write: 1 State: 1
[2022-11-06 21:42:29.016196] (info)   [TrunkedSystem]   793C TG:      58909 (                 All Pa)   Freq: 852.800000 MHz    Concluding Recorded Call_impl - Last Update: 6s    Call_impl Elapsed: 30

Interestingly, it even misses the couple of TDUs before the HDU/LDUs that start the new transmission.

robotastic commented 1 year ago

Ah! - I must have messed up the search. Good find! So that is one obvious difference. The question is whether there was a change that stopped the TDUs from being generated/processed or if there was a change in the code that makes it fail when TDUs don't come in.

tadscottsmith commented 1 year ago

I can't seem to get the hyperlink quite right, but there are a few instances of call->stop_call(); in main.cc that seem to be commented out in the new version. I'm having a hard time stepping all the way through it, but any chance they were closing out the previous WAV file when new calls came in?

robotastic commented 1 year ago

Interesting - I had to go through the code to refresh my memory. It looks like call->stop_call(); doesn't change things much. It just sets the Call's State based upon whether the recorder has reached a termination flag yet or not. This gets handled in the manage_calls() loop anyhow, so it wasn't needed anymore.

aaknitt commented 1 year ago

Oh boy.

I think I'm going to owe everyone some beer for all of the collective time spent on this.

I spooled up another Pi from scratch and that one is working perfectly on both master and debug-call-mismatch

I swapped SDRs, power supplies, and SD cards between the two Pis to try to isolate the issue. The problem follows the SD card, nothing else. However, I suspect that it may not be an issue with the physical SD card but rather something with the OS/build configuration on that card. I say this because I did get the problem to occur on Ubuntu once (but was unable to reproduce it). Hard to say for sure though.

Still puzzling to me is that I could turn the problem off and on on the troublesome Pi/SD card by switching between older and newer versions.

I'd love to be able to home in on the root cause so that others can avoid it moving forward, so if anyone has any ideas on what might be different in the OS, Pi settings, etc. I'm open to doing more experimentation. However, since this doesn't seem to be widespread and doesn't duplicate/reproduce easily, I'm leaning towards closing the issue for now.

Edit: Note that I had the same issue on the troublesome Pi/SD card when running via Docker. Not sure what that means for isolating the issue but it's pertinent, I think.

sally-yachts commented 1 year ago

Compared to other resources, disk performance is one that hasn't historically been inspected very closely, and it would be a good idea to try to get a handle on what performance TR needs before things start going sideways.

It would be nice if we could configure a work/temp directory for active recordings; this would let us mount some small but quick storage/a ramdisk for the performance-sensitive task and then move it to slower storage once the call has been concluded. Just spitballing here of course.

robotastic commented 1 year ago

No worries at all @aaknitt - it was a real good exercise and always helpful for me to go back and check things. This is definitely really weird either way! Just to double-check, was the GNU Radio version different on any of the installs? You can check with gnuradio-config-info -v

That is a great point @sally-yachts! It is probably most noticeable on SD cards, which I think can be pretty slow... but if you have a lot of recorders writing at once, that could also hit a disk I/O threshold. Having documentation for using a ramdisk is a good idea, especially if you are just uploading everything and not planning on storing it locally long term.

jessm33 commented 1 year ago

I have the same issue, but it only seems to affect my system when I reduce the bandwidth on my SDRs. At 2.3 MHz or 2.56 MHz I don't have the issue; if I drop the bandwidth to 2 MHz or below, I start having this issue. I'm also running a decently powerful system: an AMD Ryzen 5 PRO 5650GE processor and an M.2 PCIe SSD, using Docker on Ubuntu 22.

aaknitt commented 1 year ago

@jessm33 is on to something here. I think I've got my "problem" SD card behaving by messing with the sample rate. Currently working to tease out whether it's an issue with the sample rate itself or related to which frequencies in the system are covered by which SDR, how close a frequency is to edge of SDR BW, etc.

@jessm33 can you post the details of your SDR config (center frequency & BW) for when it has the issue and when it doesn't, along with the frequencies of the system you're monitoring?

jessm33 commented 1 year ago

@aaknitt I'm monitoring 2 systems:

Fairfax County: https://www.radioreference.com/db/sid/6957 MWAA: https://www.radioreference.com/db/sid/10697

This issue is very noticeable on the Fairfax County system, as fire dispatches are announced by an automated voice on 2 different talk groups; each dispatch is at least 30 seconds long, so they are very easy to detect when they end up in the wrong talk group's recording. The active control channel on Fairfax County is 856.262500 MHz.

Working centers & rates:

            "center": 851803125,
            "rate": 2560000,
            "center": 854890625,
            "rate": 2560000,
            "center": 857434375,
            "rate": 2560000,
            "center": 859759375,
            "rate": 2560000,

Not working centers & rates:

            "center": 851803125,
            "rate": 1531256,
            "center": 854590625,
            "rate": 1606256,
            "center": 856759375,
            "rate": 1618752,
            "center": 859259375,
            "rate": 2018752,
tadscottsmith commented 1 year ago

@jessm33 Could you run the debug branch with the not working rates? It'll be pretty verbose logging, but if you could capture the logs surrounding an instance of it happening, it might really help.

https://github.com/robotastic/trunk-recorder/tree/debug-call-mismatch

aaknitt commented 1 year ago

I'll post more details later tonight, but there seems to be some correlation between choosing a sample rate that produces this message on startup and the issue being present in the newer versions:

[2022-12-11 22:28:51.253444] (info) [oasiswauk] Started with Control Channel: 853.712500 MHz
[2022-12-11 22:28:51.254846] (error) p25 trunking Decim: Nothing found
[2022-12-11 22:28:51.255042] (info) P25 Trunking single-stage decimator - Initial decimated rate: 0 Second decimated rate: 0 Initial Decimation: 80 System Rate: 2012504

On the other hand, if I choose a sampling rate that produces this message on startup, the issue does not seem to be present:

[2022-12-11 13:17:59.420498] (info) P25 Trunking two-stage decimator - Initial decimated rate: 100000 Second decimated rate: 25000 FA: 6250 FB: 12500 System Rate: 1400000
[2022-12-11 13:17:59.439656] (info) P25 Trunking ARB - Initial Rate: 1400000 Resampled Rate: 25000 Initial Decimation: 14 System Rate: 24000 ARB Rate: 0.96

On the older (4.3.2 release) versions, the issue is not present even with the "single-stage decimator" message on startup. I could be chasing ghosts here, as this is correlation, not causation. I'll post more details later.

Sample rates that seem ok include 0.9 MHz, 1.4 MHz, 2.4 MHz. An example of a sample rate that's problematic for the newer versions but not the older versions is 2.212504 MHz.

taclane commented 1 year ago

Sample rates that seem ok include 0.9 MHz, 1.4 MHz, 2.4 MHz. An example of a sample rate that's problematic for the newer versions but not the older versions is 2.212504 MHz.

The two-stage decimator should kick in when a sample rate is an even multiple of 24000, 25000, or 32000. Do things seem ok if you try 2050000 or 2016000?