Closed: corvus-ch closed this issue 5 years ago.
The encoder and decoder cores are each about 30 lines; I feel like supporting streaming operations would complicate them significantly.
Why are you encoding large amounts of data? The whole point of zbase32 is to be human-friendly; a wall of random characters is very much not human-friendly.
I am working on a tool inspired by http://www.jabberwocky.com/software/paperkey/ with a few twists.
Since I have no control over the input, and the Shamir algorithm will multiply that input data, streaming seems to be the way to go in order to keep the memory footprint at a sane level.
What I have in mind looks more or less the same as what can be found in encoding/base32.
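To make that concrete, here is a minimal sketch of the shape I have in mind, mirroring base32.NewEncoder. The zbase32.NewEncoder constructor is hypothetical; it is what I am proposing, not something the library provides today:

```go
// A sketch only: zbase32.NewEncoder does not exist upstream; the
// shape deliberately mirrors base32.NewEncoder from encoding/base32.
package main

import (
	"io"
	"os"

	"github.com/tv42/zbase32" // assuming the proposed NewEncoder lands here
)

func main() {
	// Wrap the destination in a streaming encoder, exactly like
	// base32.NewEncoder(base32.StdEncoding, os.Stdout).
	enc := zbase32.NewEncoder(os.Stdout)
	if _, err := io.Copy(enc, os.Stdin); err != nil {
		os.Exit(1)
	}
	// Close flushes the final, possibly partial, group of bytes.
	if err := enc.Close(); err != nil {
		os.Exit(1)
	}
}
```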
Surely a single cryptographic key fits in RAM at once, even after Shamir, and you don't need io.Writer.
Even on top of that, I would recommend you use zbase32 to encode short chunks, with some sort of integrity protection. This way, you can communicate to the human exactly where they have a typo/OCR flaw when re-entering the data.
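Roughly this kind of thing, as a sketch only; the chunk size, the CRC-32 choice, and the line numbering are all illustrative, and I'm assuming the library's EncodeToString helper:

```go
// Illustrative sketch of "short chunks with integrity protection":
// split the payload into small chunks, append a CRC-32 to each, and
// encode every chunk on its own numbered line. A failing CRC on
// re-entry pinpoints the exact line the human mistyped.
package main

import (
	"encoding/binary"
	"fmt"
	"hash/crc32"

	"github.com/tv42/zbase32"
)

const chunkSize = 16 // arbitrary; small enough to be comfortable to retype

func encodeChunks(data []byte) []string {
	var lines []string
	for off := 0; off < len(data); off += chunkSize {
		end := off + chunkSize
		if end > len(data) {
			end = len(data)
		}
		// Copy chunk and checksum into a fresh buffer so the append
		// cannot clobber the bytes of the next chunk.
		buf := make([]byte, 0, end-off+4)
		buf = append(buf, data[off:end]...)
		buf = binary.BigEndian.AppendUint32(buf, crc32.ChecksumIEEE(data[off:end]))
		lines = append(lines, zbase32.EncodeToString(buf))
	}
	return lines
}

func main() {
	for i, line := range encodeChunks([]byte("example secret key material")) {
		fmt.Printf("%02d: %s\n", i+1, line)
	}
}
```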
OK, agreed, memory is obviously not as much of an issue as I initially thought. Let us try this from a different angle.
> Why are you encoding large amounts of data? The whole point of zbase32 is to be human-friendly; a wall of random characters is very much not human-friendly.
While zbase32 is meant to be human-friendly, there is nothing that prohibits other use cases. If I want to handle huge amounts of data, why should I not be able to do so? I consider zbase32 a drop-in replacement for regular base32 or any other baseX encoding. Having an io.Reader or io.Writer interface allows me to try different encodings by changing a single line of code.
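For illustration, a sketch of what that one-line change looks like; the zbase32.NewEncoder mentioned in the comment is the proposed API, not something upstream provides today:

```go
// Sketch of the drop-in idea: the whole tool depends on a single
// constructor returning an io.WriteCloser, so swapping encodings is
// a one-line change.
package main

import (
	"encoding/base32"
	"io"
	"os"
)

// newEncoder is the only place that knows which encoding is in use.
func newEncoder(w io.Writer) io.WriteCloser {
	return base32.NewEncoder(base32.StdEncoding, w)
	// Switching encodings means changing just the line above, e.g.:
	// return zbase32.NewEncoder(w) // proposed streaming API
}

func main() {
	enc := newEncoder(os.Stdout)
	io.WriteString(enc, "hello")
	enc.Close() // flush the final partial group
}
```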
> Even on top of that, I would recommend you use zbase32 to encode short chunks, with some sort of integrity protection. This way, you can communicate to the human exactly where they have a typo/OCR flaw when re-entering the data.
Agreed, adding integrity protection is a crucial part of this. I handle it in a reader and writer of their own, which the zbase32 reader and writer read from or write to. My first approach was to handle it all in one place. To be honest, it was a mess of deeply nested code, so I started refactoring and ended up using the io.Reader and io.Writer interfaces. The result is a code structure of loosely coupled and reusable components.
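As a sketch of that structure: checksumWriter below stands in for my integrity layer, and the zbase32 stage is left as a comment since a streaming encoder is exactly what this issue proposes:

```go
// Sketch of the loosely coupled structure: each concern is its own
// io.Writer, and the pipeline is assembled by nesting writers.
package main

import (
	"fmt"
	"hash/crc32"
	"io"
	"os"
)

// checksumWriter forwards bytes downstream while keeping a running
// CRC-32, so integrity handling stays out of the encoding code.
type checksumWriter struct {
	w   io.Writer
	crc uint32
}

func (c *checksumWriter) Write(p []byte) (int, error) {
	c.crc = crc32.Update(c.crc, crc32.IEEETable, p)
	return c.w.Write(p)
}

func main() {
	// Intended pipeline: shamir -> integrity -> zbase32 -> destination.
	// The zbase32 stage would slot in here once a streaming encoder
	// exists; for now the integrity writer goes straight to stdout.
	var dst io.Writer = os.Stdout
	sum := &checksumWriter{w: dst}
	io.WriteString(sum, "share data\n")
	fmt.Fprintf(os.Stderr, "crc32: %08x\n", sum.crc)
}
```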
For now, I have taken the liberty of forking your code. See: https://github.com/corvus-ch/zbase32.
If you are still not interested in accepting this as a PR, feel free to close this issue. Otherwise, I will happily change my code to match yours and file a PR.
I'll accept the addition of Reader/Writer APIs if the core code manages to remain simple & maintainable enough. That is my only concern.
As it is, you seem to have not forked my code, but copy-pasted parts of it, stripping my copyright. You realize that's illegal, right?
> […] copy-pasted parts of it, stripping my copyright […]
Actually, I am unsure how much needs to be done in order to be compliant. If you can point me in the right direction, I will gladly fix that.
> I'll accept the addition of Reader/Writer APIs if the core code manages to remain simple & maintainable enough. That is my only concern.
I will see what I can do. Maybe you can have a look at my code and point out the places where it does not meet your requirements.
This just seems to add too much code and complexity to a simple library. I'll repeat what I said earlier:
> Even on top of that, I would recommend you use zbase32 to encode short chunks, with some sort of integrity protection. This way, you can communicate to the human exactly where they have a typo/OCR flaw when re-entering the data.
I have the need for streaming using io.Writer and io.Reader. I am willing to provide a pull request, given this feature matches the goal of this library.