perf: faster object access by avoid string concat
From what I could understand, V8 can predict when the same access pattern keeps happening; but when we insert some randomness, the performance of string concatenation gets noticeably worse.
'oldObjectAccess x 1,141,640,736 ops/sec ±0.53% (97 runs sampled)',
'newObjectAccess x 1,146,221,890 ops/sec ±0.33% (97 runs sampled)',
'random: oldObjectAccess x 9,099,441 ops/sec ±1.14% (92 runs sampled)',
'random: newObjectAccess x 23,251,465 ops/sec ±1.63% (88 runs sampled)'
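I don't know the exact shape of the change from the labels alone, so here is only a hedged, illustrative sketch of the general idea: the "old" access rebuilds the property key by concatenation on every call, while the "new" one reuses a precomputed key. All names (`oldObjectAccess`, `newObjectAccess`, the key scheme) mirror the benchmark labels, not the library's actual internals.

```javascript
// Illustrative object with keys built from a prefix + id scheme.
const obj = { 'circular:1': 'a', 'circular:2': 'b' };

// Old: the property key is rebuilt by string concatenation on every access.
function oldObjectAccess(id) {
  return obj['circular:' + id];
}

// New: precompute the keys once and reuse them, avoiding the concat
// in the hot path.
const keys = new Map([[1, 'circular:1'], [2, 'circular:2']]);
function newObjectAccess(id) {
  return obj[keys.get(id)];
}

console.log(oldObjectAccess(1)); // "a"
console.log(newObjectAccess(2)); // "b"
```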
perf: avoid toString when we know that the value is already a string
I did some tests: some values are fast by just leaving 'string' + value, while others are slow, so I only changed the values that showed a significant improvement.
'with toString x 243,790,997 ops/sec ±4.95% (88 runs sampled)',
'without toString x 1,138,995,770 ops/sec ±0.12% (94 runs sampled)',
'bool: with toString x 222,584,175 ops/sec ±0.11% (97 runs sampled)',
'bool: without toString x 1,142,979,415 ops/sec ±0.46% (94 runs sampled)',
'number: with toString x 110,618,185 ops/sec ±0.48% (94 runs sampled)',
'number: without toString x 1,148,295,991 ops/sec ±0.05% (99 runs sampled)',
'url: with toString x 11,248,050 ops/sec ±1.76% (84 runs sampled)',
'url: without toString x 4,678,286 ops/sec ±0.95% (94 runs sampled)',
'bigint: with toString x 24,012,147 ops/sec ±1.97% (87 runs sampled)',
'bigint: without toString x 15,661,883 ops/sec ±2.36% (78 runs sampled)'
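A minimal sketch of what this guard can look like, reading the numbers above: strings need no conversion at all, plain concatenation beats `.toString()` for numbers and booleans, while `url` and `bigint` were faster with `.toString()`. The function name `asString` is mine, not the library's.

```javascript
// Hypothetical helper: pick the cheapest string conversion per type,
// based on the benchmark results above.
function asString(value) {
  if (typeof value === 'string') {
    return value;            // already a string: skip conversion entirely
  }
  if (typeof value === 'number' || typeof value === 'boolean') {
    return '' + value;       // fast path: concat beats .toString() here
  }
  return value.toString();   // URL, BigInt, etc.: toString() measured faster
}

console.log(asString('a'));  // "a"
console.log(asString(5));    // "5"
console.log(asString(true)); // "true"
console.log(asString(10n));  // "10"
```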
perf: using passThrough always instead crypto
This is another optimization that brings the library to another level: instead of calling crypto.update every time, I only call it once at the end, with the huge chunk of string.
Before:
'hash({}) x 44,387 ops/sec ±0.47% (89 runs sampled)',
'fasterObjectHash({}) x 69,483 ops/sec ±0.37% (96 runs sampled)',
'hash(singleObject) x 19,579 ops/sec ±0.64% (97 runs sampled)',
'fasterObjectHash(singleObject) x 29,945 ops/sec ±0.36% (96 runs sampled)',
'hash(tinyArray) x 2,730 ops/sec ±0.35% (95 runs sampled)',
'fasterObjectHash(tinyArray) x 3,992 ops/sec ±0.61% (93 runs sampled)',
'hash(mediumArray) x 279 ops/sec ±1.12% (87 runs sampled)',
'fasterObjectHash(mediumArray) x 418 ops/sec ±0.08% (90 runs sampled)',
'hash(largeArray) x 24.11 ops/sec ±0.59% (44 runs sampled)',
'fasterObjectHash(largeArray) x 41.20 ops/sec ±0.93% (55 runs sampled)',
'hash(largeJson) x 1.00 ops/sec ±0.56% (7 runs sampled)',
'objectHash(largeJson, { unorderedObjects: true }) x 1.00 ops/sec ±0.21% (7 runs sampled)',
'fasterObjectHash(largeJson) x 2.09 ops/sec ±0.51% (10 runs sampled)',
'fasterObjectHash(largeJson, { unorderedObjects: true }) x 2.08 ops/sec ±0.61% (10 runs sampled)'
After:
'hash({}) x 43,767 ops/sec ±1.32% (95 runs sampled)',
'fasterObjectHash({}) x 117,922 ops/sec ±0.78% (93 runs sampled)',
'hash(singleObject) x 19,378 ops/sec ±1.02% (89 runs sampled)',
'fasterObjectHash(singleObject) x 61,093 ops/sec ±0.79% (97 runs sampled)',
'hash(tinyArray) x 2,689 ops/sec ±0.98% (93 runs sampled)',
'fasterObjectHash(tinyArray) x 9,589 ops/sec ±0.84% (93 runs sampled)',
'hash(mediumArray) x 281 ops/sec ±0.69% (88 runs sampled)',
'fasterObjectHash(mediumArray) x 985 ops/sec ±1.20% (93 runs sampled)',
'hash(largeArray) x 23.78 ops/sec ±0.54% (43 runs sampled)',
'fasterObjectHash(largeArray) x 74.40 ops/sec ±1.77% (64 runs sampled)',
'hash(largeJson) x 0.99 ops/sec ±0.38% (7 runs sampled)',
'objectHash(largeJson, { unorderedObjects: true }) x 0.99 ops/sec ±1.47% (7 runs sampled)',
'fasterObjectHash(largeJson) x 1.77 ops/sec ±5.14% (9 runs sampled)',
'fasterObjectHash(largeJson, { unorderedObjects: true }) x 1.89 ops/sec ±2.89% (9 runs sampled)'
With this modification, we went from roughly a 40-70% improvement to 200-250% in the normal cases; for large objects the improvement varies, but it shows up to 70-80%.
Hello!
Inspired by an issue at https://github.com/nestjs/nest/issues/10844, I did a performance analysis of this library, and here is what I found:
perf: faster isNativeFunction check
This one was very fun to discover; I didn't expect that such a simple validation could work, and it was so fast compared to the original version.
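The text doesn't say which "simple validation" was used, so this is only a hedged guess at the common trick: native functions stringify to a body containing `[native code]`, so a substring check can replace heavier parsing.

```javascript
// Possible shape of the simple check: native functions stringify to
// something like "function max() { [native code] }".
function isNativeFunction(fn) {
  return typeof fn === 'function' &&
    Function.prototype.toString.call(fn).includes('[native code]');
}

console.log(isNativeFunction(Math.max));       // true
console.log(isNativeFunction(function () {})); // false
```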
perf: faster extract object type from toString
Inspired by the previous improvement, I did the same here.
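As a hedged sketch of what extracting the type tag from `toString` usually looks like (the helper name is mine): `Object.prototype.toString.call(x)` yields `"[object Type]"`, and slicing out the middle is a cheap way to get the type name.

```javascript
// Hypothetical helper: "[object Array]" -> "array"
function objectType(value) {
  return Object.prototype.toString.call(value).slice(8, -1).toLowerCase();
}

console.log(objectType([]));         // "array"
console.log(objectType(new Date())); // "date"
console.log(objectType(null));       // "null"
```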
Until here, we gained some performance improvements, like:
perf: faster circular checking by using map
By far, this is the greatest optimization: using a `Map` instead of an `Array` speeds this library up a lot.
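A hedged sketch of why this pays off (names are illustrative, not the library's internals): tracking visited objects in a `Map` keyed by object identity turns the per-object circular check into an O(1) lookup, where an `Array` would need an O(n) `indexOf` scan on every visit.

```javascript
// Hypothetical bookkeeping for circular references.
// Old style: seenArray.indexOf(obj) scans the whole array -> O(n).
// New style: a Map lookup is O(1).
const seen = new Map();

function circularIndexOf(obj) {
  const index = seen.get(obj);
  return index === undefined ? -1 : index;
}

const a = {};
const b = {};
seen.set(a, 0);
seen.set(b, 1);

console.log(circularIndexOf(b));  // 1
console.log(circularIndexOf({})); // -1
```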
perf: avoid splice method to insert values
For this one, I did some benchmarks, but I was not happy with the results:
So, instead of doing some array manipulation, I just create another array and run through it when needed.
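A minimal sketch of this kind of change, under my reading of the description (all names here are illustrative): rather than inserting each value into its sorted position with `splice` (which shifts elements on every insert), append to a plain array and only order it when it is actually consumed.

```javascript
const items = [];

// No splice: push is O(1) and shifts nothing.
function addItem(item) {
  items.push(item);
}

// Order the values only at consumption time, on a copy.
function forEachSorted(fn) {
  items.slice().sort().forEach(fn);
}

addItem('b');
addItem('a');
addItem('c');

const out = [];
forEachSorted((x) => out.push(x));
console.log(out.join(',')); // "a,b,c"
```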
The improvement was very slight; I could only see it when profiling. First, the previous version:
Then, it becomes:
perf: use valueOf instead toJSON
This one could be a breaking change, because it changes the hashes.
Instead of using `toJSON`, which prints an ISO string, I just use `valueOf`, which returns the Unix time and is way faster than `toJSON`:
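To illustrate the difference on a `Date` (and why the hashes change):

```javascript
const d = new Date(0);

// Old input to the hash: the ISO string from toJSON.
console.log(d.toJSON());  // "1970-01-01T00:00:00.000Z"

// New input: the epoch milliseconds from valueOf, much cheaper to produce.
console.log(d.valueOf()); // 0
```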
perf: faster lookup for valid algorithm and encoding
This one brings some improvements when you need to instantiate `object-hash` a lot, see:
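A hedged sketch of what a faster lookup can look like (the value lists and function name are illustrative, not the library's actual tables): validating against a `Set` is an O(1) membership test, where scanning an array with `indexOf` is O(n) on every instantiation.

```javascript
// Hypothetical whitelists; a Set gives constant-time .has() checks.
const validAlgorithms = new Set(['sha1', 'md5', 'sha256']);
const validEncodings = new Set(['hex', 'base64', 'binary']);

function validateOptions(algorithm, encoding) {
  if (!validAlgorithms.has(algorithm)) {
    throw new Error('Algorithm "' + algorithm + '" not supported');
  }
  if (!validEncodings.has(encoding)) {
    throw new Error('Encoding "' + encoding + '" not supported');
  }
}

validateOptions('sha1', 'hex'); // ok: two O(1) lookups
```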
perf: avoid mutate options object by adding new properties
This one I tend to do because of V8's hidden classes; it didn't bring any significant performance improvement, but I think it is good to avoid mutating objects.
I also tend to avoid mutating function arguments, because of: https://github.com/NathanaelA/v8-Natives/blob/master/opti-killers.md#3-managing-arguments
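A minimal sketch of the pattern, assuming defaults along these lines (the option names and defaults are illustrative): build a fresh, fully-shaped object from the caller's options instead of adding properties to the argument, so the caller's object is never mutated and every normalized object shares one hidden-class shape.

```javascript
// Hypothetical normalizer: never touches the caller's object.
function normalizeOptions(options = {}) {
  return {
    algorithm: options.algorithm || 'sha1',
    encoding: options.encoding || 'hex',
    unorderedObjects: options.unorderedObjects === true,
  };
}

const userOptions = { algorithm: 'md5' };
const opts = normalizeOptions(userOptions);

console.log(opts.algorithm);            // "md5"
console.log(opts.encoding);             // "hex"
console.log('encoding' in userOptions); // false: input left untouched
```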