brian-armstrong closed this issue 7 years ago
It's arguable that the decoder should return number of decoded bits, not bytes. I'm not sure which is the lesser of two evils. Bits would be more precise, but I suspect the common case for encoded messages must be multiple of 8 bits -- probably the vast majority of messages? And if we return bits here then the user must convert to bytes afterwards.
Looks good to me. I would stick with bits.
Bits is the more correct choice, I agree. I'm looking at the encoder now and it seems it assumes bytes, not bits. So I think I would need to change its interface as well, since it would really only make sense to have them match.
I agree that bits might be the way to go, but I can't immediately think of the best way to do it. I think if libcorrect were to do that on the decoder, it'd need to do it in the encoder too. That change is probably not insignificant. It'd also really break backwards compat, though looking at how decoder return value works now it's arguably already broken. I'm merging this for now and will think about bits later.
By the way, does your application specifically need bits? I seem to remember that what you wanted to do with Reed-Solomon used them too.
Yes. And I am still looking for something that can reliably correct single-bit errors which are evenly distributed. Unfortunately, it looks like neither Reed-Solomon nor Viterbi lets me do that.
Conv codes will work pretty well if you have evenly distributed errors, but the overall rate needs to be low. They fail really quickly when faced with burst errors too, and they generally work a lot better when you have some error estimate like with a radio demodulator. Have you tried a code like r=1/2, k=15? It's computationally expensive but pretty robust
> Conv codes will work pretty well if you have evenly distributed errors, but the overall rate needs to be low.
Yes, unfortunately I have a pretty consistent bit error rate of 27-30%.
> Have you tried a code like r=1/2, k=15? It's computationally expensive but pretty robust
I've just checked that combination, and no, it's still mostly the same result: 15% of bits are still erroneous.
Out of curiosity, which polynomials did you use for 1/2,15? I don't remember if I've added suggested ones yet for that combo, though the simulated annealing finder under tools/ can find some decent ones.
I think for your error rate, you would either want to interleave Conv codes and RS (liquid-dsp has an interleaver under fec/ if you want an example) or else you'd be better served by LDPC or Turbo Codes. Turbo codes inherently have an interleaver anyway, it's just part of the design. Sadly this library doesn't implement either of those yet.
I took these https://github.com/nippur72/Chip64/blob/master/VITERBI.cpp#L69
Ok, I'll try those. Thanks for the suggestion.
@MageSlayer Hi, would you mind reviewing this?