Closed dinosaure closed 5 years ago
v0.1.0
uses the same design as `ke`, a fast implementation of queues. Benchmarks have not been done on the compression side yet - mostly because the API changed a lot - but I believe that an int bigarray is much faster than a functional queue. `ke` has some benchmarks about that, and they show that `Ke.Rke` (which is what the new `decompress` uses) is indeed much faster than `Ke.Fke` (which is Okasaki's queue).
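To make the comparison concrete, here is a minimal sketch (not `ke`'s actual API - the module names and signatures below are illustrative assumptions) of the two designs: a mutable ring buffer backed by an int bigarray, and a purely functional two-list queue in the Okasaki style.

```ocaml
(* Sketch of a bigarray-backed ring buffer, in the spirit of Ke.Rke.
   This is NOT Ke's real API; it only illustrates the data layout. *)
module Rke_sketch = struct
  open Bigarray

  type t =
    { buf : (int, int_elt, c_layout) Array1.t
    ; mutable rd : int   (* read cursor, grows monotonically *)
    ; mutable wr : int } (* write cursor, grows monotonically *)

  (* [capacity] must be a power of two so [land] wraps indices. *)
  let create capacity =
    { buf = Array1.create Int c_layout capacity; rd = 0; wr = 0 }

  let mask t i = i land (Array1.dim t.buf - 1)

  let push t v =
    Array1.set t.buf (mask t t.wr) v;
    t.wr <- t.wr + 1

  let pop t =
    let v = Array1.get t.buf (mask t t.rd) in
    t.rd <- t.rd + 1;
    v
end

(* Sketch of a functional queue, in the spirit of Ke.Fke: push conses
   onto [back]; pop reverses [back] into [front] when [front] is empty. *)
module Fke_sketch = struct
  type t = { front : int list; back : int list }

  let empty = { front = []; back = [] }

  let push q v = { q with back = v :: q.back }

  let pop q =
    match q.front with
    | x :: r -> (x, { q with front = r })
    | [] ->
      (match List.rev q.back with
       | x :: r -> (x, { front = r; back = [] })
       | [] -> invalid_arg "pop: empty queue")
end
```

The intuition behind the benchmark result: the ring buffer does one array store per push and one load per pop, with no allocation, while the functional queue allocates a cons cell on every push and occasionally pays an O(n) `List.rev` on pop.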
Currently, the `deflate` algorithm uses an Okasaki queue, and it could be interesting to find a data structure whose concatenation is fast while keeping the same semantics as `Queue`. Then, in the `blocklz` algorithm, we use the stdlib's `Queue`, which is a linked list. From benchmarks in some other projects (`encore` and `faraday`), the stdlib's `Queue` is not the best choice in some contexts, so we need to figure that out (context & benchmarks).

This issue is a starter to optimize (again) `decompress`. However, we already found the bottleneck: the adler-32 checksum.
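For reference, adler-32 itself is a small algorithm (specified in RFC 1950): two running sums taken modulo 65521. A straightforward OCaml version - not `decompress`'s optimized implementation, just the textbook definition:

```ocaml
(* Textbook adler-32 (RFC 1950): [a] sums the bytes, [b] sums the
   successive values of [a]; both are kept modulo the prime 65521.
   The checksum packs [b] in the high 16 bits and [a] in the low 16. *)
let adler32 (s : string) : int =
  let base = 65521 in
  let a = ref 1 and b = ref 0 in
  String.iter
    (fun c ->
      a := (!a + Char.code c) mod base;
      b := (!b + !a) mod base)
    s;
  (!b lsl 16) lor !a
```

A naive loop like this does two divisions per byte, which is exactly why the checksum can dominate a profile; optimized implementations defer the `mod` over many bytes and process the input in blocks.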