Closed: NorbertGarfield closed this pull request 2 years ago
Hi, thank you for your PR. Inside the issue there is one more challenge you could take: now that RLE is performed before the work is distributed to threads, we can perform RLE on a chunk of input larger than 720,000 bytes, as long as the RLE output is no larger than 900,000 bytes. This could be incorporated, for example, by refactoring RLE into a reader that reads from the input reader until 900,000 bytes of output are produced (or by just filling a buffer, if a reader is overkill). Are you interested in doing this change as well so that the issue can be closed?
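The refactoring described above (run-length encode input bytes until a cap on the *output* size is reached) could be sketched roughly as follows. This is a hedged illustration only, not ribzip2's actual code: the names `rle_chunk` and `emit_run` are hypothetical, and the run encoding shown (runs of 4 or more equal bytes become four literals plus a count byte) follows bzip2's initial RLE scheme.

```rust
use std::io::Read;

// Hypothetical helper: emit a run of `len` copies of `b` in bzip2-style RLE1
// form. Runs shorter than 4 are written literally; longer runs are written as
// four literals followed by a count byte (count = run length - 4, max 255).
fn emit_run(out: &mut Vec<u8>, b: u8, mut len: usize) {
    while len > 0 {
        let chunk = len.min(259); // 4 literals + count byte covers up to 259
        if chunk < 4 {
            out.extend(std::iter::repeat(b).take(chunk));
        } else {
            out.extend(std::iter::repeat(b).take(4));
            out.push((chunk - 4) as u8);
        }
        len -= chunk;
    }
}

// Hypothetical "RLE as a reader" sketch: pull bytes from `input` and
// run-length encode them, stopping once the encoded output is close enough
// to `max_out` (e.g. 900,000) that another run could overflow it.
fn rle_chunk<R: Read>(input: &mut R, max_out: usize) -> std::io::Result<Vec<u8>> {
    let mut out = Vec::new();
    let mut current: Option<(u8, usize)> = None; // pending (byte, run length)
    let mut byte = [0u8; 1];
    loop {
        // A flushed run plus the pending run each add at most 5 bytes, so
        // stopping 10 bytes early guarantees out.len() never exceeds max_out.
        if out.len() + 10 > max_out {
            break;
        }
        if input.read(&mut byte)? == 0 {
            break; // EOF
        }
        match current {
            Some((b, len)) if b == byte[0] && len < 259 => {
                current = Some((b, len + 1)); // extend the pending run
            }
            Some((b, len)) => {
                emit_run(&mut out, b, len); // flush and start a new run
                current = Some((byte[0], 1));
            }
            None => current = Some((byte[0], 1)),
        }
    }
    if let Some((b, len)) = current {
        emit_run(&mut out, b, len); // flush the final pending run
    }
    Ok(out)
}
```

With this shape, the caller can keep requesting chunks whose encoded size is bounded by 900,000 bytes regardless of how much raw input each chunk consumed, which is exactly what allows the pre-RLE chunk to grow past 720,000 bytes.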
Yes, by all means. Should I add the changes to this PR, or do I need to open another one?
Great! You can add your changes to the PR.
I might be over-engineering this, but my best shot is:
I could not find the policy regarding rebases, so I rebased my branch onto the latest HEAD of the main branch. Let me know if that's fine.
I just wanted to express how grateful I am for your PR. At least in my benchmarks (using k-means clustering for Huffman), the compression rate is now close to the original implementation!
Solves https://github.com/torfmaster/ribzip2/issues/4
Moved `generate_block_data` and `rle::rle` from `block_encoder` to `stream`.