aido / RadioClock

Noise-resilient radio clock decoder library for Arduino
GNU General Public License v3.0

Comments #1

Open udoklein opened 9 years ago

udoklein commented 9 years ago

Hi Aido,

As suggested in the latest comment at http://blog.blinkenlight.net/experiments/dcf77/dcf77-library/ I will write my comments here.

1) I think your separation of the decoders from the framework was definitely a good idea. I will probably add another split to also isolate the timer code from the rest. You do not need to worry about this; I will implement it.

2) If possible, switch your editor to 4 spaces for indentation, as this makes diffing slightly simpler.

3) I think for optimal results the phase binning should be moved from the generic code to the specific code. It probably works anyway, but as it is now it is optimized for DCF77. For MSF a slightly different kernel might be more appropriate. However, this is a priority 2 or 3 issue. Experience shows that the phase lock usually has _lots_ of headroom compared to the other locks. This can wait.

4) As you rightly noticed, the sync mark binning is tricky. From a mathematical point of view the decoder can extract a bit vector of 5 bits each second, each bit corresponding to a 100 ms timeslot. The signal has a lot of known bits and some unknown bits. What we need is to compare the known signal with the received signal bit by bit and sum this up (exclusive or, then sum up the bits). Since the received signal comes in bit by bit we can compute this successively. Of course we have to do this for all 60 possible shifts of the minute signal. So with each bit that comes in we compute the result and distribute it to the 60 bins. In addition we can use some tricks. That is: since almost every minute has 60 seconds it does not matter if we look for 0s or 1s. So in the end I would just look at the 1s and then increment each bin that has a suitable offset.

So if the bit vector is x,a,b,c,d (for the first 500 ms of a second), then increment the bin with offset 0 (relative to the phase) once if a==1, once more if b==1, once more if c==1 and once more if d==1. In addition, if a==1 decrement each of the bins at offsets 1-16. If b==1, increment each of the bins at offsets 53-58 and decrement the bins at offsets 52 and 59.

Ignoring the 0s does not make any difference; distributing them as well would only double the results, so there is no additional information gain in distributing the zeroes. This distribution trick saves a lot of CPU cycles.
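
For illustration, a minimal sketch of this distribution idea might look like the following (sync_bins, wrap60 and distribute_one are made-up names, not part of the library): for every known 100 ms slot we keep the set of offsets at which an observed "1" is consistent with the sync mark, and bump exactly those bins instead of re-correlating all 60 shifts from scratch.

    // Illustrative sketch, not library code: one bin per candidate
    // position of the minute sync mark.
    uint16_t sync_bins[60] = {0};

    uint8_t wrap60(int16_t i) {
        return (uint8_t)(((i % 60) + 60) % 60);
    }

    // Called once per second for each known 100 ms slot that was observed
    // as "1". offsets[] lists the "seconds back" at which a "1" in this
    // slot matches the expected minute layout.
    void distribute_one(uint8_t current_second,
                        const uint8_t offsets[], uint8_t offset_count) {
        for (uint8_t i = 0; i < offset_count; ++i) {
            ++sync_bins[wrap60((int16_t)current_second - offsets[i])];
        }
    }

The exact offset sets per slot follow from the known MSF minute layout; they are refined further down in this thread.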

Tell me your results.

Cheers, Udo

P.S. I ordered an MSF module so that I can also test.

udoklein commented 9 years ago

Hi Aido,

I suggest the following comment in front of the phase_detection() function:

// According to Wikipedia https://en.wikipedia.org/wiki/Time_from_NPL#Shortcomings_of_the_current_signal_format
// the signal will show 100 ms without carrier ("1"), then 100 ms for bit A and 100 ms for bit B
// no carrier translates to "1", carrier translates to "0". The signal will end with 700 ms of carrier "0".
// The only exception is the first second of each minute (the minute marker), which has 500 ms of no carrier and then 500 ms with carrier.
// This implies: 59x"1" and  1x"0" for the first 100ms
// Then           7x"1" and 18x"0" for the second 100 ms as well as 35 data bits
// Finally        1x"1" and 37x"0" for the third  100 ms as well as 22 data bits

// This gives a rough estimate of
// 59x"1" and  1x"0" for the first  100 ms
// 34x"1" and 36x"0" for the second 100 ms (notice that "1" is slightly less likely to appear in the data bits)
//  8x"1" and 42x"0" for the third  100 ms (the DUT1 bits have only a 25% probability of being "1",
//                                          as 50% must always be zero --> count 12x"0" and 4x"1")

// as a consequence the filter kernel is suitable for both DCF77 and MSF

My conclusion is that the filter kernel is not optimal for MSF but definitely more than good enough. In other words: not only can this wait, it is OK as it is.

On the other hand, if you do not mind the CPU utilization, a proper kernel would be:

void phase_detection() {
    // We will compute the integral over a sliding 300 ms window.
    // The integral is used to find the window of maximum signal strength.
    uint32_t integral = 0;

    for (uint16_t bin = 0; bin < bins_per_100ms; ++bin) {
        integral += ((uint32_t)bins.data[bin])*59;
    }

    for (uint16_t bin = bins_per_100ms; bin < bins_per_200ms; ++bin) {
        integral += (uint32_t)bins.data[bin]*34;
    }

    for (uint16_t bin = bins_per_200ms; bin < bins_per_300ms; ++bin) {
        integral += (uint32_t)bins.data[bin]<<3;
    }

    bins.max = 0;
    bins.max_index = 0;
    for (uint16_t bin = 0; bin < bin_count; ++bin) {
        if (integral > bins.max) {
            bins.max = integral;
            bins.max_index = bin;
        }

        // Slide the window by one bin: the leading bin leaves the first
        // segment (weight 59), the bins crossing into the first and second
        // segments gain weight 59-34 and 34-8, and the bin entering the
        // third segment contributes weight 8.
        integral -= (uint32_t)bins.data[bin]*59;
        integral += (uint32_t)(bins.data[wrap(bin + bins_per_100ms)]*(59-34) +
                               bins.data[wrap(bin + bins_per_200ms)]*(34-8) +
                               (bins.data[wrap(bin + bins_per_300ms)]<<3));
    }

    // max_index indicates the position of the signal window, i.e. the start of the second.
    // Now how can we estimate the noise level? This is very tricky because
    // averaging has already happened to some extent.

    // The issue is that most of the undesired noise happens around the signal,
    // especially after high->low transitions. So as an approximation of the
    // noise I test with a phase shift of 200ms.
    bins.noise_max = 0;
    const uint16_t noise_index = wrap(bins.max_index + bins_per_200ms);

    for (uint16_t bin = 0; bin < bins_per_100ms; ++bin) {
        bins.noise_max += ((uint32_t)bins.data[wrap(noise_index + bin)])*59;
    }

    for (uint16_t bin = bins_per_100ms; bin < bins_per_200ms; ++bin) {
        bins.noise_max += (uint32_t)bins.data[wrap(noise_index + bin)]*34;
    }

    for (uint16_t bin = bins_per_200ms; bin < bins_per_300ms; ++bin) {
        bins.noise_max += (uint32_t)bins.data[wrap(noise_index + bin)]<<3;
    }
}
udoklein commented 9 years ago

I had a look at your sync_mark kernel. It might work. However, a better kernel would be to use:

    //  1) A sync mark will score +10 points for the current bin
    //  2) A "1" in bit A will score +1 points 52 and 59 bins back
    //  2b) A "0" in bit A will score +1 points for bins 1-16 back as well as 53 and 58 back
    //  3) A "0" in bit B will score +1 points for bins 17-52 and 59.

There is no real need to even out the bits. However, since this kernel has a larger l_1 norm than the DCF77 kernel, it requires changing from uint8_t bins to uint16_t bins. 60 bytes of additional memory :( This could be somewhat reduced by only looking at some of the known bits, as you did. The ultimate criterion would be to actually test it under noisy conditions and see if your more lightweight kernel suffices. In the end we have to decide between noise tolerance and memory footprint.

aido commented 9 years ago

Hi Udo,

On your advice I've made some changes to the sync_mark kernel.

I made a slight change to 1) above.

1) A sync mark will score +8 points for the current bin and -2 points for 1 to 59 back.

and added

4) A "1" in bit A will score +1 points 53-58 bins back

Maybe this uses more processor cycles than necessary.

Also, maybe I'm confused but did you mix up the bit values above? i.e.

    //  2) A "1" in bit A will score +1 points 52 and 59 bins back
    //  2b) A "0" in bit A will score +1 points for bins 1-16 back as well as 53 and 58 back

should be:

    //  2) A "0" in bit A will score +1 points 52 and 59 bins back
    //  2b) A "0" in bit A will score +1 points for bins 1-16 back
udoklein commented 9 years ago

Mathematically it is equivalent to just score 2*59+8 points for the sync mark and leave the other bins alone.

With regard to (2): yes, I was wrong, you are right.
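
Put together, a sketch of the scoring with the corrected bit A rules could look like the following (illustrative names only, not the library's API; uint16_t bins because of the larger l_1 norm mentioned earlier):

    // Illustrative sketch, not library code.
    uint16_t sync_bins[60] = {0};

    uint8_t wrap60(int16_t i) {
        return (uint8_t)(((i % 60) + 60) % 60);
    }

    // Add one point to each bin in the offset range [first, last] "seconds back".
    void bump(uint8_t current_second, uint8_t first, uint8_t last) {
        for (uint8_t offset = first; offset <= last; ++offset) {
            ++sync_bins[wrap60((int16_t)current_second - offset)];
        }
    }

    // Called once per second with the demodulated slots of that second.
    void score_sync(uint8_t second, bool sync_mark, bool bit_A, bool bit_B) {
        if (sync_mark) {
            sync_bins[second] += 10;      // 1) sync mark: score the current bin only
        }
        if (bit_A) {
            bump(second, 53, 58);         // "1" in bit A: offsets 53-58
        } else {
            bump(second, 1, 16);          // "0" in bit A: offsets 1-16
            bump(second, 52, 52);         // ... as well as 52
            bump(second, 59, 59);         // ... and 59
        }
        if (!bit_B) {
            bump(second, 17, 52);         // 3) "0" in bit B: offsets 17-52
            bump(second, 59, 59);         // ... and 59
        }
    }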

aido commented 9 years ago

Hi Udo,

I've been tweaking the MSF_Demodulator::decode_interesting() code but cannot seem to lock on to the minute sync marker.

Any ideas?

udoklein commented 9 years ago

My standard approach is to isolate the code in question, e.g. copy decode_interesting() into a separate sketch. Then feed it with a "synthetic" signal and see if it locks. The point is: if you feed it outside of an ISR you can Serial.print() at ANY point without fear of locking up the controller.
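
For example, a throwaway harness in this spirit could look like the sketch below (everything in it is illustrative; decoder_under_test() stands for whatever was copied out of MSF_Demodulator::decode_interesting()). It feeds one clean synthetic sample per millisecond from loop(), i.e. outside any ISR, so Serial.print() is safe at any point.

    // Illustrative test harness, not library code. It generates a clean,
    // noise-free MSF-like signal, one sample per millisecond.
    bool synthetic_sample(uint8_t second, uint16_t ms, bool bit_A, bool bit_B) {
        if (second == 0) { return ms < 500; }    // minute marker: 500 ms "off"
        if (ms < 100)    { return true; }        // every second starts "off"
        if (ms < 200)    { return bit_A; }
        if (ms < 300)    { return bit_B; }
        return false;                            // carrier for the rest
    }

    void setup() {
        Serial.begin(115200);
    }

    void loop() {
        static uint16_t ms = 0;
        static uint8_t second = 0;

        // Crude clean minute with all data bits zero (a real MSF minute
        // would also set bits 53A-58A); replace with whatever the test needs.
        const bool sample = synthetic_sample(second, ms, false, false);

        // decoder_under_test(sample);   // the isolated code goes here
        if (ms == 0) {
            Serial.println(second);      // safe: we are not inside an ISR
        }

        ms = (ms + 1) % 1000;
        if (ms == 0) { second = (second + 1) % 60; }
        delay(1);                        // roughly 1 kHz sample rate
    }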

In the end I unit test the most tricky stuff.

Besides that I see no obvious issue. But this does not mean a thing.

aido commented 9 years ago

Hi Udo,

I tweaked the lock_threshold value and eventually got a lock. It just takes a very long time (several hours with a perfect signal). So certain values may need to be tweaked further.

Tweaks aside, I am happy now that the MSF code is more or less working and I am able to get the correct time. A bit more testing may be needed as I haven't checked flags like uses_summertime etc.

Oh, and I nearly forgot. I have temporarily replaced:

    bool zero_provider() {
        return 0;
    }

with:

    bool zero_provider() {
        // Sample the input: the leading 0 would invert the sample if set to 1,
        // and the constant 1 in the conditional selects analog sampling
        // (threshold 200) over a plain digital read of A5.
        const bool sampled_data =
            0 ^ (1 ? (analogRead(5) > 200)
                   : digitalRead(A5));

        // Echo the sampled value on A4.
        digitalWrite(A4, sampled_data);
        return sampled_data;
    }

until I figure out how to get the clock working with the former.

udoklein commented 9 years ago

Hi Aido,

If the signal is clean it should take at most 15 minutes. So if it takes several hours, there is something wrong with one of the filter stages. But at least it is now clear that the setup will eventually work as desired. Good work.

Best regards, Udo

P.S. My ARM port now basically works. My next step is to investigate how to leverage the huge amount of memory that I have on the ARM controller.