Closed · twegener-embertec closed this issue 4 years ago
v1.1 added some new checks, which invariably take more time to perform. Specifically, the robust cross-realm date and regex checks (via is-date-object and is-regex) are likely the culprit.
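For context on what those checks do: instanceof can't detect a Date or RegExp created in another realm (an iframe, or a vm context in node), because such a value doesn't inherit from the local Date.prototype. A brand check has to poke at the value instead. Here's a minimal sketch in the spirit of is-date-object's tryDateObject (not its exact source); is-regex does the analogous thing with a RegExp method:

// Minimal sketch of a cross-realm Date brand check (in the spirit of is-date-object).
// Date.prototype.getDay throws for anything that isn't a real Date, even a Date
// from another realm, whereas `value instanceof Date` misses cross-realm Dates.
function isDateObject(value) {
  if (!value || typeof value !== 'object') {
    return false;
  }
  try {
    Date.prototype.getDay.call(value); // throws unless value is a genuine Date
    return true;
  } catch (e) {
    return false;
  }
}

const vm = require('vm');
const foreignDate = vm.runInNewContext('new Date(0)');
console.log(foreignDate instanceof Date); // false: different realm's Date.prototype
console.log(isDateObject(foreignDate));   // true

That try/catch (plus the analogous is-regex check) runs for every pair of objects compared, which is presumably where the per-comparison overhead in the benchmarks below comes from.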
v2 will have even more robust cross-realm checks, so it matches node core.
I'm very interested in improving performance, but only where it won't sacrifice correctness. Benchmarks would be helpful.
Here's a simple benchmark that tests deep equal checks on an object 4 levels deep with 3 keys at each level:
#!/usr/bin/env node
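// Assumption: 'deep-equal-110' and 'deep-equal-101' are npm aliases (or local copies)
// of deep-equal@1.1.0 and deep-equal@1.0.1, e.g. installed via
// `npm install deep-equal-110@npm:deep-equal@1.1.0 deep-equal-101@npm:deep-equal@1.0.1`.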
const equal110 = require('deep-equal-110');
const equal101 = require('deep-equal-101');
const ldIsEqual = require('lodash/isEqual');
const Benchmark = require('benchmark');
const suite = new Benchmark.Suite();
function makeDeepObj(width, depth, value) {
  if (depth < 1) {
    return value;
  }
  const d = {};
  for (let m = 0; m < width; m += 1) {
    d[m] = makeDeepObj(width, depth - 1, value);
  }
  return d;
}
const data = makeDeepObj(3, 4, 'cat');
const data2 = makeDeepObj(3, 4, 'cat');
suite
  // Add tests.
  .add('node-deep-equal 1.0.1', function() {
    equal101(data, data2);
  })
  .add('node-deep-equal 1.1.0', function() {
    equal110(data, data2);
  })
  .add('lodash-is-equal', function() {
    ldIsEqual(data, data2);
  })
  // Add listeners.
  .on('cycle', function(event) {
    console.log(String(event.target));
  })
  .on('complete', function() {
    console.log('Fastest is ' + this.filter('fastest').map('name'));
  })
  // Run async.
  .run({ 'async': true });
Results:
node-deep-equal 1.0.1 x 24,842 ops/sec ±1.42% (85 runs sampled)
node-deep-equal 1.1.0 x 1,609 ops/sec ±0.78% (74 runs sampled)
lodash-is-equal x 16,015 ops/sec ±0.52% (89 runs sampled)
Fastest is node-deep-equal 1.0.1
Changing the second 'cat' to 'catfish', so the two objects are no longer deeply equal (the one-line change is shown after the results below), gives:
node-deep-equal 1.0.1 x 278,097 ops/sec ±1.56% (86 runs sampled)
node-deep-equal 1.1.0 x 17,237 ops/sec ±0.92% (55 runs sampled)
lodash-is-equal x 162,234 ops/sec ±0.67% (84 runs sampled)
Fastest is node-deep-equal 1.0.1
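For reference, that change is just the data2 line of the script above:

const data2 = makeDeepObj(3, 4, 'catfish');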
Twiddling the width and depth parameters gives different results for node-deep-equal vs. lodash's isEqual; however, comparing node-deep-equal v1.0.1 with v1.1.0, v1.1.0 is always significantly slower in my tests.
What would be most helpful is using git bisect to figure out which commit(s) specifically impact performance.
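One way to make that mechanical (a sketch only; the file name and the ops/sec threshold are illustrative, chosen between the fast and slow numbers above) is a small pass/fail script driven by git bisect run:

#!/usr/bin/env node
// bisect-check.js (hypothetical name). Exits 0 when the current checkout is "fast"
// and 1 when it is "slow", so that, after `git bisect start <bad> <good>`,
// `git bisect run node bisect-check.js` can find the offending commit.
// Keep this file and its benchmark dependency outside the bisected working tree.
const Benchmark = require('benchmark');
const equal = require('./'); // the deep-equal working copy being bisected

function makeDeepObj(width, depth, value) {
  if (depth < 1) { return value; }
  const d = {};
  for (let m = 0; m < width; m += 1) { d[m] = makeDeepObj(width, depth - 1, value); }
  return d;
}

const data = makeDeepObj(3, 4, 'cat');
const data2 = makeDeepObj(3, 4, 'cat');

const bench = new Benchmark(function() {
  equal(data, data2);
});
bench.run(); // synchronous; bench.hz holds ops/sec afterwards

// Arbitrary cut-off between the ~25k ops/sec (1.0.1) and ~1.6k ops/sec (1.1.0)
// results reported above.
const THRESHOLD = 10000;
console.log(Math.round(bench.hz) + ' ops/sec');
process.exit(bench.hz >= THRESHOLD ? 0 : 1);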
Again, correctness is paramount - always much more important than performance - so I'll want to ensure we maintain v1.1's correctness fixes while improving performance.
Hi! I've hit the same issue. For me it comes from the is-date-object library, which calls tryDateObject on every object that goes through deep-equal. I'm losing xx seconds there, so I had to downgrade to version 1.0.1.
Edit: In my code I'm not using dates at all. Could you provide an option to disable the date check?
@sionzeecz I don't think it makes sense to add a bunch of options for each type of thing that's checked; there's a similar check for every builtin object type (with more coming in v2). Correctness, not performance, is most important.
I'm very happy to review PRs that improve performance without sacrificing correctness.
I'm going to close this - I will continue to support both the v1.0 and v1.1 minor lines for the foreseeable future; v2.0 will be out shortly. It's highly likely that v1.1 and v2+ will be slower than v1.0, because as node's assert.deepEqual and assert.deepStrictEqual have added features, so must we.
I will accept "performance improvements that do not sacrifice correctness" and backport them to any minor lines that I can; I will accept bugfixes to any minor line as well (but many fixes couldn't go into v1.0 without causing the same perf issues).
I was using this package because of its great speed. In my application I'm comparing thousands of things at the same time.
I will stay on 1.x. Thank you for keeping the old versions alive.
Totally reasonable choice :-) This package's primary goal, however, is to match node's assert in a portable way.
I've observed a very nasty performance regression between v1.0.1 and v1.1.0, causing roughly 2x CPU usage on average in one application, with an even greater localised impact in various situations.
I'll post some benchmarks when I get the chance, but posting this in the meantime to give an early heads up.