mryndzionek opened this issue 1 month ago
Yes, although the first step in the DSP processing is DC removal, there is still lots of noise close to DC (e.g. the LO being received through the Tayloe detector). The HIGH_PASS_FILTERING option does a decent job of removing it if enabled, but it doesn't improve the performance of the receiver, so I don't think the additional CPU loading is justified.
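For context, a DC-blocking high pass of this kind is typically just a single-pole filter. The sketch below is only an illustration of the general technique, assuming float samples; the class name and the 0.995 pole position are my own choices and not taken from rx_dsp.cpp:

```cpp
// Minimal sketch of a single-pole DC blocker (illustrative, not the
// rx_dsp.cpp implementation):  y[n] = x[n] - x[n-1] + a * y[n-1]
class DCBlocker {
public:
    explicit DCBlocker(float a = 0.995f) : a_(a) {}

    float process(float x)
    {
        const float y = x - x_prev_ + a_ * y_prev_;
        x_prev_ = x;
        y_prev_ = y;
        return y;
    }

private:
    float a_;              // pole close to 1 => corner frequency close to DC
    float x_prev_ = 0.0f;  // previous input sample
    float y_prev_ = 0.0f;  // previous output sample
};
```

Even something this cheap costs a few operations per sample on both the I and Q paths, which is the extra CPU loading being weighed up here.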
Since we are using a low IF, the DC noise is well outside the pass-band of the FFT filter, so it is completely removed before it reaches the demodulator. The only advantage of applying a high pass filter before the frequency shift is that the DC noise doesn't show up in the spectrum scope.
If we did want to remove the DC noise, it would be far more efficient to remove it in the frequency domain (e.g. by setting the frequency bins around DC to zero).
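A minimal sketch of that frequency-domain approach, assuming a complex FFT output in natural (DC-first) bin order; FFT_SIZE, the notch width and the function name are illustrative rather than taken from the project sources:

```cpp
#include <complex>
#include <cstddef>

constexpr std::size_t FFT_SIZE = 256;  // illustrative, not the project's size

// Zero the DC bin and a few bins either side of it after the forward FFT,
// so the LO leakage never reaches the demodulator or the spectrum display.
void zero_dc_bins(std::complex<float> (&bins)[FFT_SIZE], std::size_t width = 2)
{
    bins[0] = 0.0f;                    // the DC bin itself
    for (std::size_t k = 1; k <= width; ++k) {
        bins[k] = 0.0f;                // bins just above DC
        bins[FFT_SIZE - k] = 0.0f;     // mirrored bins just below DC
    }
}
```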
> The only advantage of applying a high pass filter before the frequency shift is that the DC noise doesn't show up in the spectrum scope.
Okay, but this is important, no? Otherwise the spectrum does a poor job of aiding tuning.
And am I seeing it paint negative frequencies in the first quarter (pixels 0..~32) of the spectrum? It does, however, look quite pretty with a TinySA injecting a 6kHz span sweep...
There are a few issues with the spectrum display; some are easily fixed, while others are a little more tricky.
The DC noise could be removed (preferably in the frequency domain), but it would leave a blind spot in the spectrum where signals "disappear" while tuning. Either way, the area of the spectrum where the local oscillator is tuned isn't very useful. I don't think this problem is unique to this project; the DC noise is clearly visible in the spectrum scope of the mcHF, for example: https://ka7oei.blogspot.com/2015/05/adding-waterfall-display-to-mchf.html
The noise floor is very high in the spectrum scope; signals which are perfectly audible to the receiver can still be well below the noise on the spectrum scope. This is because we are only performing an FFT on one block of data captured once every 100ms (ish). I have fixed this in the TFT waterfall build: now that FFTs are being performed in real time, we can integrate the noise from all the FFT frames over a 100ms period. This makes a massive difference to the usefulness of the spectrum scope, and most audible signals are now visible.
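A rough sketch of that kind of integration, assuming magnitude spectra arrive continuously and the display is refreshed roughly every 100ms; the class name and NUM_BINS are assumptions for illustration, not the actual TFT waterfall code:

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t NUM_BINS = 128;  // illustrative number of displayed bins

// Accumulate the magnitude of every FFT frame produced between display
// updates, then average once per refresh.
class SpectrumAverager {
public:
    // Called for every FFT frame as it is produced.
    void accumulate(const std::array<float, NUM_BINS> &magnitudes)
    {
        for (std::size_t i = 0; i < NUM_BINS; ++i) accum_[i] += magnitudes[i];
        ++frames_;
    }

    // Called once per display update (~100 ms); returns the averaged spectrum
    // and resets the accumulator.
    std::array<float, NUM_BINS> flush()
    {
        std::array<float, NUM_BINS> out{};
        if (frames_ > 0) {
            for (std::size_t i = 0; i < NUM_BINS; ++i) out[i] = accum_[i] / frames_;
        }
        accum_.fill(0.0f);
        frames_ = 0;
        return out;
    }

private:
    std::array<float, NUM_BINS> accum_{};
    std::size_t frames_ = 0;
};
```

Averaging many short FFTs smooths the random variation of the noise floor, so a steady signal only a few dB above the average noise becomes visible even though it is buried in any single frame.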
I'm keeping the "tuned" frequency in the middle of the display, but the offset of the local oscillator is roughly +/-6kHz (I chose whichever is closest). This means that 6kHz of spectrum wraps around to the opposite end of the display. In the mcHF, the local oscillator is always +6kHz; in the image you can see that the spectrum is centred in the display, and the tuned signal is always offset 6kHz to the right of the display.
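To make the wrap-around concrete, here is a hypothetical sketch of the pixel-to-bin mapping being described; SPECTRUM_BINS, DISPLAY_PIXELS and bin_for_pixel are assumptions for illustration only, not the project's drawing code:

```cpp
#include <cstddef>

constexpr std::size_t SPECTRUM_BINS  = 128;  // illustrative captured FFT bins
constexpr std::size_t DISPLAY_PIXELS = 128;  // illustrative display width

// Return the FFT bin to draw at a given pixel, keeping the tuned frequency
// on the centre pixel. tuned_bin is where the tuned signal falls within the
// captured spectrum (the centre bin shifted by the ~+/-6kHz LO offset).
// The modulo arithmetic is what makes the excess spectrum wrap around to
// the opposite end of the display.
std::size_t bin_for_pixel(std::size_t pixel, std::size_t tuned_bin)
{
    const long n   = static_cast<long>(SPECTRUM_BINS);
    const long idx = static_cast<long>(tuned_bin)
                   + static_cast<long>(pixel)
                   - static_cast<long>(DISPLAY_PIXELS / 2);
    return static_cast<std::size_t>(((idx % n) + n) % n);
}
```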
The alternatives I can think of would be:
I'm not entirely sure which option is best; I think 2 or 4 would be more visually pleasing. Option 2 would give better receiver sensitivity at the expense of reduced frequency range. Option 4 would have the advantage that the DC noise would always be in approximately the same place.
I would say option 2 is the way to go.
> The noise floor is very high in the spectrum scope; signals which are perfectly audible to the receiver can still be well below the noise on the spectrum scope. This is because we are only performing an FFT on one block of data captured once every 100ms (ish). I have fixed this in the TFT waterfall build: now that FFTs are being performed in real time, we can integrate the noise from all the FFT frames over a 100ms period. This makes a massive difference to the usefulness of the spectrum scope, and most audible signals are now visible.
Where is this "TFT waterfall build"? Can we fix this on testing?
I noticed the "side lobes" popping up while tuning and I think this is due to DC bias. I also noticed the code guarded by the HIGH_PASS_FILTERING define in rx_dsp.cpp. It does seem to help, but why is it not enabled by default? Improvements are planned, but are there other reasons?