foobar2019 closed this issue 5 years ago
Never mind: the high IDs actually do work, up to the full 64 bits, since the ID is really just an RNG seed for the matrix. Handy if IDs are stochastic. The crashes seen when trying to use massive repair block counts are largely just memory buffer issues.
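The "ID is just an RNG seed" point can be sketched with a toy row generator. This is not wirehair's actual generator (splitmix64 and the fixed degree here are illustrative assumptions); it just shows why any sparse or stochastic 64-bit ID still maps deterministically to a well-defined mixing row:

```python
# Toy illustration (NOT wirehair's actual row generator): a 64-bit block ID
# used purely as a PRNG seed to pick which source symbols a repair block mixes.
# splitmix64 and the degree-3 choice are illustrative assumptions.

def splitmix64(x: int) -> int:
    """One round of the splitmix64 mixer: 64-bit input -> well-mixed 64-bit output."""
    x = (x + 0x9E3779B97F4A7C15) & 0xFFFFFFFFFFFFFFFF
    x = ((x ^ (x >> 30)) * 0xBF58476D1CE4E5B9) & 0xFFFFFFFFFFFFFFFF
    x = ((x ^ (x >> 27)) * 0x94D049BB133111EB) & 0xFFFFFFFFFFFFFFFF
    return x ^ (x >> 31)

def row_from_block_id(block_id: int, k: int, degree: int = 3) -> list:
    """Derive the set of k source-symbol indices mixed by this block ID.

    Because the ID only seeds a PRNG, any 64-bit value works; IDs need not
    be dense or sequential for the matrix row to be well defined.
    """
    state = block_id & 0xFFFFFFFFFFFFFFFF
    neighbors = set()
    while len(neighbors) < min(degree, k):
        state = splitmix64(state)
        neighbors.add(state % k)
    return sorted(neighbors)
```

Encoder and decoder can regenerate the same row from the ID alone, which is why stochastic 64-bit block IDs are fine.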
Hey, sorry, I'm at work and just saw this. Yeah, IIRC the limit is the peeling step at the start; it has fixed-size limits you can lift. -Chris
On Wed, Feb 6, 2019, 2:24 PM foobar2019 <notifications@github.com> wrote:
Closed #9 https://github.com/catid/wirehair/issues/9.
— You are receiving this because you are subscribed to this thread. Reply to this email directly, view it on GitHub https://github.com/catid/wirehair/issues/9#event-2122612354, or mute the thread https://github.com/notifications/unsubscribe-auth/AAPZIZFCjWDjWdElnRntgM4HQyXnCRxhks5vK1YcgaJpZM4amN4j .
Oh right, the next problem is that the seeds need to be tuned above 64000 symbols, which takes a lot of time...
Oh, you mean the block IDs? Yeah, that should work. Sounds like a bug?
From what I understand, wirehair is LDPC over GF(256), instead of the textbook XORs in GF(2) of Luby's LT codes. The boatload of matrix-algebra algorithms for solving block dependencies is largely moon math to me, but it seems to involve some sort of systematic Cauchy code bolted onto LT, so as to drastically improve recoverability and contain the combinatorial explosion of higher-order blocks seen in naive LT. So more or less RaptorQ, but with a bag of tricks to make it run fast.
So I surmise this could mean that N > 2^16 (with K < 2^16) is a possibility here, just as with RaptorQ.
I want to check before I go on a misguided adventure of mass-replacing uint16_t's with uint32_t's and trying to bend the tabgens for larger N: is my thinking entirely misguided, i.e. is there some hard limit on N even when K is < 2^16 (for instance, the resultant compressed matrix for GE...)? Any more gotchas I should be looking for in my ill-fated attempt?
Let me rub the ego in exchange: wirehair is hands down the best-performing LDPC codec, and the code documentation is top notch (even if a lot of the tricks still fly over my head). The limited N is the one wrinkle here that bars deployment in high-block-loss scenarios as a drop-in replacement for RQ.
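For readers unfamiliar with the "peeling step" referenced above, here is a minimal sketch of GF(2) peeling over toy integer symbols. This is the generic LT decoding idea, not wirehair's implementation (which, per the description above, continues with GE over a compressed residual matrix when peeling stalls); it shows the workspace whose fixed-size limits the thread discusses lifting:

```python
# Minimal GF(2) peeling decoder (toy sketch, not wirehair's code).
# Each received block is an equation: (set of source indices, XOR of their values).
# Peeling repeatedly solves degree-1 equations and substitutes the result back,
# shrinking the remaining equations until it stalls or everything is recovered.

def peel_decode(equations, k):
    """Recover k source symbols from XOR equations via iterative peeling.

    equations: list of (set_of_source_indices, xor_value) pairs.
    Returns a list of recovered symbols (None where peeling stalled).
    """
    # Copy the equations so we can mutate working state.
    eqs = [(set(idx), val) for idx, val in equations]
    recovered = [None] * k
    progress = True
    while progress:
        progress = False
        # Solve every current degree-1 equation directly.
        for idx, val in eqs:
            if len(idx) == 1:
                i = next(iter(idx))
                if recovered[i] is None:
                    recovered[i] = val
                    progress = True
        if progress:
            # Substitute solved symbols into the remaining equations.
            new_eqs = []
            for idx, val in eqs:
                for i in list(idx):
                    if recovered[i] is not None:
                        idx.discard(i)
                        val ^= recovered[i]
                if idx:
                    new_eqs.append((idx, val))
            eqs = new_eqs
    return recovered
```

A degree-1 equation starts the "ripple"; each substitution may expose new degree-1 equations, so recovery cascades. The working set of pending equations is what a fixed-size buffer would cap.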