pierrec / node-lz4

LZ4 fast compression algorithm for NodeJS
MIT License

Compression optimizations #33

Closed · kripken closed this issue 9 years ago

kripken commented 9 years ago

The first commit uses typed arrays for the hashTable; the second optimizes the hash computation itself.
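
For readers without the commits in front of them, here is a minimal sketch of the two ideas, assuming an LZ4-style multiplicative hash and a 64K-entry table; the table size, the names, and the use of Math.imul are illustrative assumptions, not the exact code from the commits:

```js
// Illustrative sketch only, not the actual patch.
var HASH_LOG = 16;
var HASH_SIZE = 1 << HASH_LOG; // assumed table size

// Idea 1: back the hash table with a typed array instead of a plain
// Array, so the engine stores raw 32-bit integers contiguously.
var hashTable = new Uint32Array(HASH_SIZE);

// Idea 2: keep the hash computation in 32-bit integer arithmetic.
// Math.imul avoids the double-precision multiply that a plain
// `sequence * 2654435761` would go through. (Math.imul needs a newer
// engine than node 0.10's V8; a shift/add mix is the usual fallback.)
function hashU32(sequence) {
  return Math.imul(sequence, 2654435761) >>> (32 - HASH_LOG);
}
```

During a match search, `hashTable[hashU32(seq)]` would then record the last input position where that 4-byte sequence was seen.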

pierrec commented 9 years ago

Hmm. On paper this seems faster, but initial tests with node 4.0.0 show the following:

Without your optimization:

MBP-PC:node-lz4 pierrecurto$ node benchmark/bench.js
Input file: /Users/pierrecurto/sandbox/git/node-lz4/data/lorem_1mb.txt
Input size: 1000205
Output size: 404073
lz4.encodeBlock native x 379 ops/sec ±1.20% (90 runs sampled)
lz4.decodeBlock native x 1,859 ops/sec ±1.33% (94 runs sampled)
lz4.encodeBlock JS x 131 ops/sec ±1.22% (78 runs sampled)
lz4.decodeBlock JS x 320 ops/sec ±0.85% (93 runs sampled)

With your optimization:

MBP-PC:node-lz4 pierrecurto$ node benchmark/bench.js
Input file: /Users/pierrecurto/sandbox/git/node-lz4/data/lorem_1mb.txt
Input size: 1000205
Output size: 404073
lz4.encodeBlock native x 393 ops/sec ±0.42% (93 runs sampled)
lz4.decodeBlock native x 1,937 ops/sec ±0.77% (98 runs sampled)
lz4.encodeBlock JS x 47.26 ops/sec ±1.23% (63 runs sampled)
lz4.decodeBlock JS x 332 ops/sec ±0.69% (93 runs sampled)

Any idea why there is such a difference?

kripken commented 9 years ago

Not sure why you're seeing different results than me, except that I'm on node-v0.10.25. I get 40.18 ops/sec (lz4.encodeBlock JS) before my patches, 102 with the first, and 110 with the second.
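
Since the two benchmarks disagree mainly on node 0.10 vs 4.0, one hedged guess is that the V8 versions involved optimize typed-array access very differently. A throwaway micro-benchmark along these lines (hypothetical, not part of the repo) could confirm which table representation wins on a given node build:

```js
// Hypothetical check: time hash-table-style writes through a plain
// Array vs a Uint32Array on the current node version.
function fill(table) {
  for (var i = 0; i < 5e7; i++) {
    table[i & 0xffff] = i; // same access pattern as a 64K hash table
  }
  return table[12345]; // read something back so the loop isn't dead code
}

function bench(label, table) {
  var start = Date.now();
  fill(table);
  console.log(label + ': ' + (Date.now() - start) + ' ms');
}

bench('plain Array', new Array(1 << 16));
bench('Uint32Array', new Uint32Array(1 << 16));
```

If the plain Array wins on one engine and the Uint32Array on the other, that alone would explain the opposite encodeBlock JS results above.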