windytan / redsea

Command-line FM-RDS decoder with JSON output.
MIT License

Noise issues #30

Closed: windytan closed this issue 7 years ago

windytan commented 8 years ago

I've gotten some reports that redsea requires a stronger FM signal than other decoders. Possible reasons are discussed below.

1) rtl_fm

There's jitter in the 57 kHz PLL (realized with nco_crcf_pll_step in liquid-dsp), especially when the signal is noisy.
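For reference, one iteration of such a PLL with liquid's `nco_crcf` object might look roughly like this; the sample rate, the loop bandwidth, and the naive `arg()` phase detector are illustrative assumptions, not redsea's actual code:

```cpp
// Minimal sketch of one nco_crcf PLL iteration (liquid-dsp).
// Note: a real RDS PLL must also deal with the BPSK modulation on the
// subcarrier; this assumes a plain carrier for simplicity.
#include <cmath>
#include <complex>
#include <liquid/liquid.h>

void pll_iterate(nco_crcf pll, std::complex<float> x) {
    std::complex<float> lo;
    nco_crcf_cexpf(pll, &lo);                         // current NCO output sample
    float phase_error = std::arg(x * std::conj(lo));  // how far the NCO lags the input
    nco_crcf_pll_step(pll, phase_error);              // nudge phase/frequency toward lock
    nco_crcf_step(pll);                               // advance the oscillator one sample
}

int main() {
    const float fs = 171000.0f;                       // assumed MPX sample rate
    nco_crcf pll = nco_crcf_create(LIQUID_NCO);
    nco_crcf_set_frequency(pll, 2.0f * M_PI * 57000.0f / fs);
    nco_crcf_pll_set_bandwidth(pll, 0.01f);           // narrower = less jitter, slower lock
    pll_iterate(pll, {1.0f, 0.0f});                   // feed one dummy sample
    nco_crcf_destroy(pll);
}
```

The loop bandwidth is the main tradeoff here: a narrower loop averages out noise (less jitter) but takes longer to lock and to recover after a dropout.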

Below, the PLL tracks a good-quality RDS subcarrier. 99.9 % of blocks were received. Time is in seconds.

[plot: PLL frequency over time, good-quality signal]

Here's a noisy signal, with 60.1 % of all blocks received.

[plot: PLL frequency over time, noisy signal]

Average spectral power of the two signals, good signal in green and the noisy one in red:

[plot: average spectral power of the good (green) and noisy (red) signals]

Looking at the graph, there's a 27 dB difference in SNR. Is it realistic to receive error-free data in the noisy case?

3) Symbol synchronizer

mvglasow commented 7 years ago

My setup’s slightly different, but based on the same code and thus likely to suffer from the same issues: audio samples are obtained with C code based on rtl_fm and then processed into an RDS binary stream by a Java implementation based on an early (ca. April 2015) version of redsea. This is implemented as a plugin to RDS Surveyor, which does all the higher-level RDS stuff in addition to visualizing signal strength and RDS block error rates. The plugin also downsamples the audio to 48,000 Hz and plays it through the sound card, which is helpful because the amount of static is a rough indicator of signal quality.

I use a Logitech VG0002A (FC0013 tuner), no extra shielding, hooked up to my roof antenna. OS is Ubuntu MATE 16.04 (64-bit), running on an Intel Core2 Duo P8400 @ 2.26GHz × 2. For reference, I also use a Si4703-based dongle, i.e. a dedicated RDS FM tuner, wrapped in aluminium foil for shielding, with the same software setup.

I notice the Si4703 picks up fewer stations than the RTL2832, though its RDS is usually crisp with hardly any block errors. With the RTL2832, I get good RDS data on some days; on others things look quite messy. The antenna setup in my house probably isn’t the greatest either, as I’m also having issues with analog radio reception.

My observations:

windytan commented 7 years ago

Thanks, great observations! Note that the current versions, based on liquid-dsp, are much more noise-resistant and efficient; CPU load is at 0.8 % on my 2.8 GHz Intel Core i7.

mvglasow commented 7 years ago

Thanks for the heads-up; this is where my implementation differs, as I needed the carrier demodulation part to be in pure Java and thus used Java DSP Collection. It’s a lot less feature-complete than liquid-dsp; modernizing that part of the code will take some more research.

I just enabled CSV stats in my implementation and monitored the PLL frequency. On a good sample, it jumps around wildly for about a quarter of a second, then goes to 57002 Hz and slowly decreases to around 56997.5 Hz with minimal oscillation, though that takes a few seconds.
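(The logging itself is trivial; a hypothetical C++ helper, assuming a liquid-dsp PLL, with the Hz conversion being the only non-obvious part:)

```cpp
// Hypothetical CSV logger for the tracked PLL frequency;
// nco_crcf_get_frequency() returns radians per sample.
#include <cmath>
#include <cstdio>
#include <liquid/liquid.h>

void log_pll_frequency(std::FILE* csv, double t_seconds, nco_crcf pll, float fs) {
    float f_hz = nco_crcf_get_frequency(pll) * fs / (2.0f * M_PI);
    std::fprintf(csv, "%.3f,%.1f\n", t_seconds, f_hz);  // time [s], frequency [Hz]
}
```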

Sometimes, after things have stabilized, I have brief periods of mostly bad samples, or even temporary interruptions, which quickly return to normal. These seem to correspond to the frequency diverging from its stable value and then returning in the stats.

I’m wondering if the pilot tone is subject to similar jitter. If the pilot tone remains stable while the PLL frequency is all over the place, it may be a sign that the PLL is doing weird things (or something’s specifically messing up the frequency range of the subcarrier). If there is similar jitter in the pilot tone—I’d expect the changes in the PLL frequency to slightly lag behind—we may have some issue that affects all frequencies and the PLL jitter is just a symptom of it. Sporadically dropped samples come to mind, or maybe jitter in the sample rate.

In the latter case, it would seem logical to rely on the pilot tone as a tuning reference. However, I see that you dropped pilot tone recovery in e4edd12—what was the motivation for that? Did you observe any changes in noise resilience before and after?

windytan commented 7 years ago

Pilot tone recovery was dropped because not all RDS-carrying stations are guaranteed to have a pilot tone. For instance, a local station here is monaural, and thus has no stereo subcarrier or pilot tone, yet it transmits a PS name via RDS.

It could be possible to detect the presence of a pilot tone and choose the clock reference based on that; or use a command-line switch. After all, as per the RDS standard, the RDS subcarrier is supposed to be locked to the third harmonic of the 19 kHz pilot tone.
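Deriving the clock from the pilot would be straightforward, since 57 kHz is exactly three times 19 kHz; a minimal sketch, assuming a pilot-tracking `nco_crcf` PLL that is already locked:

```cpp
// Sketch: regenerate a 57 kHz reference from a locked 19 kHz pilot PLL by
// tripling the tracked pilot phase (the RDS subcarrier is its 3rd harmonic).
// Any fixed phase offset between pilot and subcarrier still has to be found.
#include <cmath>
#include <complex>
#include <liquid/liquid.h>

std::complex<float> rds_clock_from_pilot(nco_crcf pilot_pll) {
    float theta = 3.0f * nco_crcf_get_phase(pilot_pll);
    return { std::cos(theta), std::sin(theta) };
}
```

Detecting whether a pilot is present at all would then decide between this and a free-running 57 kHz PLL.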

I didn't get to test the effect it had on noise resilience, as I didn't yet have the test scripts that I have now.

mvglasow commented 7 years ago

Interesting—in fact I was wondering the other day if the pilot tone is present on monaural stations which transmit RDS.

Another thing I noticed here: when I turn the dial and pick up some noise before tuning into a good station, it takes a very long time before I get any RDS data. In one instance, I spent some 2–3 minutes listening to static before tuning into the strongest local station here; no RDS until I gave up 5 minutes later. Analysis showed that for most of the latter part, the PLL remained stubbornly locked to some 56600 Hz, with remarkable stability.

Conclusion: Noise can profoundly throw the PLL off, an effect which seems to increase with the duration of the noise, and it can take a long time to recover even when a good signal is received again.

Another approach might be to run an FFT (or a DFT on just the relevant frequency ranges) on the baseband signal and look for the pattern of the RDS subcarrier around 57 kHz: we should see two peaks, some 1.65 kHz apart, and a local minimum in the middle, which would be the subcarrier frequency. If we don’t see a clear RDS subcarrier pattern, we’re probably listening to a non-RDS station (or even plain static) and should leave the subcarrier frequency alone. When we detect the RDS subcarrier again after having lost it, re-initialize the PLL with the default frequency.
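A minimal sketch of that check, using the Goertzel algorithm to evaluate the DFT at just the bins of interest; the ±825 Hz offsets follow from the 1.65 kHz peak spacing above, while the 2× peak-to-dip margin is a made-up threshold:

```cpp
// Goertzel: power of the DFT bin nearest f_hz, over n samples of x at rate fs.
#include <cmath>
#include <cstddef>

double goertzel_power(const float* x, std::size_t n, double f_hz, double fs) {
    double coeff = 2.0 * std::cos(2.0 * M_PI * f_hz / fs);
    double s1 = 0.0, s2 = 0.0;
    for (std::size_t i = 0; i < n; i++) {
        double s0 = x[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

// True if the MPX spectrum shows the two-peaks-with-a-dip RDS signature.
bool rds_subcarrier_present(const float* mpx, std::size_t n, double fs) {
    double lower = goertzel_power(mpx, n, 57000.0 - 825.0, fs);  // 56.175 kHz
    double mid   = goertzel_power(mpx, n, 57000.0,         fs);  // local minimum
    double upper = goertzel_power(mpx, n, 57000.0 + 825.0, fs);  // 57.825 kHz
    return lower > 2.0 * mid && upper > 2.0 * mid;               // margin is a guess
}
```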

While we’re at it, we could check for both the pilot tone and the subcarrier and, if both are present, see if they agree. Since the signal level of the pilot tone is higher than that of the RDS subcarrier (10% vs. 5%), I’d expect the pilot tone to have better noise resilience.

mvglasow commented 7 years ago

Results are discouraging.

I gave up on pilot tone recovery, as it saturated my CPU, negating any potential improvements in reliability. Maybe redsea lends itself better to that approach.

The DFT approach doesn’t seem to work—unless I’ve made an error somewhere. I collected two seconds of samples (which should give me 0.5-Hz frequency bins), then ran a DFT at 17/19/21 kHz as well as at 54/56.175/57/57.825 kHz and looked for the peak pattern.

No matter whether I was listening to a good station or static, the DFT analysis kept flipping happily between detecting and losing the pilot tone and/or the RDS subcarrier (independently of each other). Frequently, on the good station, DFT analysis would tell me it had lost the subcarrier but the decoder would happily continue spitting out data.

mvglasow commented 7 years ago

A few more ideas:

I see the signal level of the good MPX looks more “balanced” than that of the noisy one, which looks more jagged, with occasional peaks. Possibly as a result of those, AGC gain is lower. I wonder if it’s possible to set the AGC to a more “aggressive” setting (higher gain) and how this would affect the PLL and everything after.

Speaking of gain control—when I mentioned gain control in software, I meant “determine in software what gain value to set the tuner to”. For redsea, that would mean controlling rtl_fm parameters.

Frequency correction in the “homebrew” PLL looked like this: `fsc -= 0.5 * pll_beta * d_phi_sc;`. I’m by no means an expert on signals, but would it be possible to factor signal quality into that equation, as a kind of confidence level which would be lower when noise is detected? (And would this still be possible with liquid-dsp?)
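As a hypothetical sketch, with `confidence` in [0, 1] coming from some noise detector such as the one described below:

```cpp
// Hypothetical: scale the homebrew PLL's frequency correction by a
// signal-quality estimate, so that phase errors measured in noise move the
// frequency less than phase errors measured on a clean signal.
float pll_update(float fsc, float pll_beta, float d_phi_sc, float confidence) {
    return fsc - confidence * 0.5f * pll_beta * d_phi_sc;
}
```

With liquid-dsp, the closest equivalent knob would presumably be narrowing the loop bandwidth via `nco_crcf_pll_set_bandwidth()` when quality drops.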

As for detecting noise, I just worked something out for an FM seek algorithm and am quite happy with the results. Because RSSI is a poor indicator of a valid FM station (there are weak transmitters as well as strong noise), I analyzed the demodulated spectrum up to the RDS subcarrier frequency: I ran an FFT across the spectrum, then determined the average power level (in dB) and the mean absolute deviation. For noise, the mean absolute deviation is lower than for a good station, but the average power level is higher. By experimenting with some good and some bad signals I established a threshold for both values, then computed a simple weighted difference, with weights chosen so that the result would be zero at the thresholds. The greater the value, the better the signal. Maybe that’s a basis for a confidence level.
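A sketch of that metric, with placeholder thresholds and weights (mine were found experimentally and depend on the FFT size and gain settings):

```cpp
// Noise metric over the log-power spectrum of the demodulated signal up to
// the RDS subcarrier frequency: noise tends to have a high average level but
// a low mean absolute deviation; a good station shows the opposite.
#include <cmath>
#include <vector>

double signal_quality(const std::vector<double>& power_db) {
    double mean = 0.0;
    for (double p : power_db) mean += p;
    mean /= power_db.size();

    double mad = 0.0;  // mean absolute deviation around the mean
    for (double p : power_db) mad += std::fabs(p - mean);
    mad /= power_db.size();

    const double mean_thresh = -40.0, mad_thresh = 6.0;  // placeholder thresholds
    const double w_mean = 1.0, w_mad = 1.0;              // placeholder weights
    // Weighted difference, zero exactly at the thresholds; larger is better.
    return w_mad * (mad - mad_thresh) - w_mean * (mean - mean_thresh);
}
```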

To work around issues with noise detuning the PLL beyond recovery, I ended up monitoring the PLL frequency and allowing it to drift within ±14 Hz of the subcarrier frequency (7 Hz is the tolerance as per the specs, which I doubled to allow for similar inaccuracy on the receiving end). When the frequency goes outside that range, I simply reset it to 57 kHz. Admittedly not the most elegant solution, but it dramatically improved behavior in scanning scenarios, alternating between good stations, noisy stations and pure static.
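The guard itself is tiny; roughly:

```cpp
// Sketch of the drift guard: snap the tracked subcarrier frequency back to
// nominal whenever it leaves a ±14 Hz window around 57 kHz.
#include <cmath>

float guard_subcarrier(float fsc) {
    const float kNominal  = 57000.0f;
    const float kMaxDrift = 14.0f;  // 2 × the 7 Hz transmitter-side tolerance
    return (std::fabs(fsc - kNominal) > kMaxDrift) ? kNominal : fsc;
}
```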

windytan commented 7 years ago

PLL drift/detuning is no longer a problem, now that liquid is in use. The only remaining issue there is jitter with very noisy signals, as in the pictures above, but I'm not even sure if that causes any degradation in quality.

As for the homebrew PLL design in very old versions of redsea, I'm kind of reluctant to comment on it, as I have no background in the theory myself.

windytan commented 7 years ago

After testing the current version of redsea against RDS Spy on a noisy MPX signal, it seems redsea performed better, recovering around twice as many groups. So I guess noise should not be a problem any more.