cnlohr / esptracker

Lighthouse-based tracker using the TS4231
MIT License

Coming up with a tracking algorithm #4

cnlohr opened this issue 6 years ago

cnlohr commented 6 years ago

This is sort of directed at @r2x0t - though it is open to anyone else. I have in mind how I want to do my demanchestration, and it will be similar to how I did it with my Ethernet projects. It can be performed on a simple microcontroller in real time without the need for post-processing, since the lighthouse signal is so much slower than the 10BASE-T Ethernet demanchestration I was performing before. The basic description is found here: https://hackaday.com/2016/04/01/ethernet-controller-discovered-in-the-esp8266/
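Just to make the demanchestration step concrete, here is a minimal edge-timing style Manchester decode sketch in C. This is only an illustration with placeholder names and timing constants, not the I2S/DMA approach from the Ethernet write-up, and it assumes we are already synchronized so the first edge seen is a mid-bit edge:

```c
#include <stdint.h>
#include <stdbool.h>

#define T_HALF 100               /* nominal half-bit period in timer ticks (placeholder) */

extern void emit_bit(int bit);   /* hypothetical sink for decoded bits */

static bool at_boundary = false;

// Called for every edge on the line.  dt is the time since the previous edge,
// level is the line level after this edge (1 = rising edge, 0 = falling edge).
// In Manchester code every bit has a mid-cell transition; an extra transition
// appears at the cell boundary only when two equal bits are adjacent.
void manchester_edge(uint32_t dt, int level)
{
    if (dt > (3 * T_HALF) / 2) {
        // ~ full bit period: previous edge and this one are both mid-bit edges
        emit_bit(level);            // rising = 1, falling = 0 (convention assumed)
        at_boundary = false;
    } else if (at_boundary) {
        // second half-step after a boundary edge: this is the mid-bit edge
        emit_bit(level);
        at_boundary = false;
    } else {
        // half-step: cell-boundary edge, the mid-bit edge comes next
        at_boundary = true;
    }
}
```

Error handling (lost edges, glitches, re-sync) is left out; that is the part that actually takes the effort on real signals.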

But that gives us a bit stream: 1's and 0's that might be lossy or rough. The next step is finding which polynomial is used. @r2x0t's algorithm looks like it only checks bits where there are taps. My brother Andrew recommended feeding a bitstream in and verifying the bits that follow it; this has the benefit of effectively checking the full width of the polynomial, all 17 bits, instead of just the tapped bits. Either way, we can probably determine which polynomial is incoming without much heartache.
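To make sure I'm describing it right, here is a rough sketch of that check in C. The polynomial constant is a placeholder, not one of the real lighthouse polynomials, and the bit/tap ordering is an assumption:

```c
#include <stdint.h>
#include <stdbool.h>

#define LFSR_BITS 17
#define LFSR_MASK ((1u << LFSR_BITS) - 1)
#define POLY      0x1B400u   /* placeholder tap mask, substitute the real polys */

// Parity (XOR of all bits) of a 32-bit word.
static inline uint32_t parity(uint32_t v)
{
    v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1u;
}

// Seed a 17-bit register with the first 17 received bits, then require every
// following bit to match what the LFSR recurrence predicts.  If they all
// match, the stream is consistent with this polynomial.
bool matches_poly(const uint8_t *bits, int nbits, uint32_t poly)
{
    if (nbits <= LFSR_BITS)
        return false;

    uint32_t state = 0;
    for (int i = 0; i < LFSR_BITS; i++)
        state = (state << 1) | (bits[i] & 1u);

    for (int i = LFSR_BITS; i < nbits; i++) {
        uint32_t predicted = parity(state & poly);   /* XOR of the tapped bits */
        if (predicted != (bits[i] & 1u))
            return false;                            /* wrong poly or bit error */
        state = ((state << 1) | predicted) & LFSR_MASK;
    }
    return true;
}
```

In practice you would want to tolerate a few mismatches instead of bailing on the first one, since the stream can be lossy.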

Here comes the tricky bit: How do we determine where in the stream we are? I.e. from a base polynomial, what is our stream's "t" from base start? This seems like both the most critical and most difficult operation. Does anyone know any shortcuts or techniques? I am still haunted by the first sentence of chapter 6 of my CDMA book: "Now we come to the most difficult part: synchronization." One could use a LUT, but it would be a large LUT for a microcontroller: you would need 128k words, each word being at least 17 bits in width. Surely there is a better trick that could be done here.

r2x0t commented 6 years ago

As you know how to do the Manchester decoding, I will start after that:

My brother Andrew recommended feeding a bitstream in and verifying the bits that follow it; this has the benefit of effectively checking the full width of the polynomial, all 17 bits, instead of just the tapped bits.

If you mean checking all bits in the shift register against some known copy, that's very impractical, as it means having a lookup table of all possible combinations for every polynomial used. If you are planning on implementing this in any memory-limited uC, the only way forward is to use the properties of the LFSR and detect it using the taps, as I explained before. It's fast and doesn't need any lookup tables.
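Roughly what I mean, as a sketch (the polynomial masks here are placeholders, not the real lighthouse values, and the bit ordering is an assumption): keep one history register of received bits and, for each candidate polynomial, count how many consecutive bits have satisfied the LFSR recurrence. A long run means that polynomial is present.

```c
#include <stdint.h>

#define NPOLY     2
#define LFSR_BITS 17
#define LFSR_MASK ((1u << LFSR_BITS) - 1)

static const uint32_t polys[NPOLY] = { 0x1B400u, 0x12C00u };  /* placeholders */

static uint32_t history;          /* last received bits, newest at bit 0        */
static int      nbits_seen;       /* how many bits have been fed in so far      */
static int      run_len[NPOLY];   /* consecutive bits consistent with each poly */

static inline uint32_t parity(uint32_t v)
{
    v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1u;
}

// Feed one demodulated bit into the detector.
void feed_bit(uint32_t bit)
{
    if (nbits_seen >= LFSR_BITS) {           /* only test once the history is full */
        for (int p = 0; p < NPOLY; p++) {
            uint32_t predicted = parity(history & polys[p]);
            if (predicted == (bit & 1u))
                run_len[p]++;                /* recurrence holds, extend the run   */
            else
                run_len[p] = 0;              /* broken: bit error or other poly    */
        }
    }
    history = ((history << 1) | (bit & 1u)) & LFSR_MASK;
    nbits_seen++;
}
```

No lookup tables, just shifts and a parity.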

Here comes the tricky bit: How do we determine where in the stream we are? I.e. from a base polynomial, what is our stream's "t" from base start?

Actually, once detected, this is quite easy to do. Copy the found sequence from the shift register to a new one and shift it using the correct taps for the found polynomial (running it as an LFSR sequence generator) until you reach the known shift register value at the end of the data burst. The number of shifts then equals the distance from the detected bits to the end of the burst... and you have the original LFSR sequence position. It would also be possible to run the LFSR generator backwards to the start; the result would be the same in the end.
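In code it is just this kind of loop (a sketch; the end-of-burst register value and the tap mask are whatever constants the real system uses):

```c
#include <stdint.h>

#define LFSR_BITS 17
#define LFSR_MASK ((1u << LFSR_BITS) - 1)

static inline uint32_t parity(uint32_t v)
{
    v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1u;
}

// Run the LFSR forward from 'state' until it reaches 'end_state' (the known
// register value at the end of the data burst) and return the number of
// steps, i.e. the distance in bits from the detected position to the end.
// Returns -1 if 'end_state' is never reached within one full period.
long distance_to_end(uint32_t state, uint32_t end_state, uint32_t poly)
{
    for (long steps = 0; steps < (1L << LFSR_BITS); steps++) {
        if (state == end_state)
            return steps;
        uint32_t fb = parity(state & poly);        /* feedback from the taps */
        state = ((state << 1) | fb) & LFSR_MASK;   /* advance one position   */
    }
    return -1;   /* should not happen for a maximal-length sequence */
}
```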

Nice thing about this is you can do it in two parts:

1) During the sweep, you collect all known LFSR hits. For each LFSR, you only need to keep the shift register value of the strongest signal, or just of the longest run of valid LFSR bits (which should be the strongest signal; it would be better to also add some signal strength measurement, as for best precision we will want exactly the middle of the time when the beam strikes the sensor). A rough bookkeeping sketch is below.

2) After the sweep, you run the LFSR generator for each found poly until you reach the known value at the end of the burst. As a result you will have the exact distance from each detected LFSR to the end (and so also from the start) of the data burst.

Again, no lookup tables necessary, just bit shifts and basic operations. The main idea behind this two-step processing is that you don't want to waste your CPU resources on possible reflections (finding the position means running the shift register 100000+ times), so it makes sense to do it only once per lighthouse sweep and only for the strongest signal detected.
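The step 1 bookkeeping can be as simple as this (just a sketch, the names are made up):

```c
#include <stdint.h>

#define NPOLY 2   /* as in the detection sketch above */

// For each polynomial, remember only the best hit of the current sweep:
// the shift register value and timestamp of the longest run of valid bits.
struct lfsr_hit {
    uint32_t state;       /* shift register value at the best hit           */
    int      run_len;     /* length of that run of consistent bits          */
    uint32_t timestamp;   /* e.g. sample index, for the mid-beam time later */
};

static struct lfsr_hit best[NPOLY];

// Called by the streaming detector whenever it extends a run for poly p.
void note_hit(int p, uint32_t state, int run_len, uint32_t timestamp)
{
    if (run_len > best[p].run_len) {
        best[p].state     = state;
        best[p].run_len   = run_len;
        best[p].timestamp = timestamp;
    }
}
```

Then step 2 runs the generator loop from above once per entry in `best[]`, after the sweep is over.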

I will write some example detection code tomorrow, just to see how it would work and post it here. Something to take bitstream from stdin and find all LFSR sequences in it and print the distances from start of the burst...

cnlohr commented 6 years ago

A) Detection - My brother's recommendation is to fill up only the size of the LFSR, i.e. 17 bits, and then just run the LFSR against the incoming data. I just came to the conclusion that his solution may be functionally the same as yours - and yours is probably faster in practice.

B) There is in fact a hybrid approach I just thought of. You could have 10 checkpoints with known codes: start running the LFSR and see if you hit one of those checkpoints. The only problem is there's no magic way to check all 10 checkpoints at once. I definitely don't want to have to complete the whole chain to determine where we are; that could be time consuming, especially because there will be multiple base stations all running.
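Roughly what I'm picturing (sketch only; the checkpoint tables would be precomputed offline per polynomial, and it still just tests the checkpoints one at a time in a loop):

```c
#include <stdint.h>

#define LFSR_BITS 17
#define LFSR_MASK ((1u << LFSR_BITS) - 1)
#define NCHECK    10   /* checkpoints spread evenly through the sequence */

// Precomputed per polynomial: register value and absolute position of each
// checkpoint in the sequence (hypothetical tables, filled in offline).
extern const uint32_t checkpoint_state[NCHECK];
extern const long     checkpoint_pos[NCHECK];

static inline uint32_t parity(uint32_t v)
{
    v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1u;
}

// Advance 'state' until it matches some checkpoint; the position of the
// starting state is then checkpoint_pos[i] - steps (adjust modulo the
// sequence length 2^17 - 1 if it goes negative).  With evenly spaced
// checkpoints the worst case is ~1/10 of the chain instead of all of it.
long find_position(uint32_t state, uint32_t poly)
{
    for (long steps = 0; steps < (1L << LFSR_BITS); steps++) {
        for (int i = 0; i < NCHECK; i++)
            if (state == checkpoint_state[i])
                return checkpoint_pos[i] - steps;
        uint32_t fb = parity(state & poly);
        state = ((state << 1) | fb) & LFSR_MASK;
    }
    return -1;   /* no checkpoint hit: bad state or wrong poly */
}
```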

r2x0t commented 6 years ago

Here is the code: https://gist.github.com/r2x0t/477a3704f4ab8e82741837935d848375

I had to also make a bitstream generator to simulate lighthouse signals. You can pipe the GEN output directly into the DETECT program to test everything.

DETECT implements the checkpoints idea, and it also only processes the longest found match for each LFSR. The generator program can simulate light reflections, and they should not change the decoding result of the main signal. You can also simulate bit errors in the generator: error rates worse than about 1 in 30 really disturb the algorithm, but below that it seems to be quite good as long as you have enough bits.

Still, this is just a proof of concept; you can probably improve it a lot, tweak some parameters, or change the detection algorithm altogether. I think the best implementation would be in an FPGA, not a CPU. Even a cheap Altera MAX II or similar would do the job. That said, my detection program can probably run in real time. I have optimized all parity calculations using a lookup table, and that resulted in a good speedup.
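The parity optimization is the usual byte-wise table trick, something like this (a sketch, not the exact code from the gist):

```c
#include <stdint.h>

// 256-entry table: parity_table[b] is the parity of byte b.  Built once at
// startup; after that the parity of a 17-bit register value costs three
// table lookups and two XORs instead of a chain of shifts.
static uint8_t parity_table[256];

void init_parity_table(void)
{
    for (int b = 0; b < 256; b++) {
        int v = b, p = 0;
        while (v) { p ^= v & 1; v >>= 1; }
        parity_table[b] = (uint8_t)p;
    }
}

static inline uint32_t parity17(uint32_t v)
{
    return parity_table[v & 0xFF] ^
           parity_table[(v >> 8) & 0xFF] ^
           parity_table[(v >> 16) & 0xFF];
}
```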