First, let me say: thanks for the work here. This is great, and it looks like we need it for 0.11.x and eventually 0.12.x. I pulled down your code to test it out, and there was one alarming performance drop in the benchmark tests (between master and your branch) on OSX with node v0.10.28. I wanted to make sure you remembered to run node-gyp rebuild, and to show you what I observed.
I started by testing your branch after a node-gyp rebuild:
% node --version
v0.10.28
% ./run_tests test/benchmark/benchmark.js
benchmark.js
msgpack.pack: 2665ms, JSON.stringify: 3756ms
ratio of msgpack.pack/JSON.stringify: 0.7095314164004259
✔ benchmark - msgpack.pack faster than JSON.stringify with array of 1m objects
msgpack.pack: 4377ms, JSON.stringify: 1126ms
ratio of msgpack.pack/JSON.stringify: 3.887211367673179
✔ benchmark - JSON.stringify faster than msgpack.pack over 1m calls
msgpack.unpack: 2046ms, JSON.parse: 633ms
ratio of JSON.parse/msgpack.unpack: 3.2322274881516586
✔ benchmark - JSON.parse faster than msgpack.unpack over 1m calls
msgpack pack: 2606 ms
msgpack unpack: 31317 ms
json pack: 3768 ms
json unpack: 3334 ms
msgpack pack: 7883 ms
msgpack unpack: 30400 ms
json pack: 4013 ms
json unpack: 3385 ms
msgpack pack: 8128 ms
msgpack unpack: 30609 ms
json pack: 4071 ms
json unpack: 3325 ms
✔ benchmark - output above is from three runs on a 1m element array of objects
msgpack pack: 4475 ms
msgpack unpack: 2193 ms
json pack: 1431 ms
json unpack: 726 ms
msgpack pack: 4448 ms
msgpack unpack: 2178 ms
json pack: 1417 ms
json unpack: 716 ms
msgpack pack: 4426 ms
msgpack unpack: 2138 ms
json pack: 1395 ms
json unpack: 714 ms
✔ benchmark - output above is from three runs of 1m individual calls
OK: 5 assertions (182351ms)
Then I moved to master and rebuilt the code
% git checkout master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
% node-gyp rebuild
<SNIP>
% ./run_tests test/benchmark/benchmark.js
benchmark.js
msgpack.pack: 2391ms, JSON.stringify: 3747ms
ratio of msgpack.pack/JSON.stringify: 0.6381104883907126
✔ benchmark - msgpack.pack faster than JSON.stringify with array of 1m objects
msgpack.pack: 4911ms, JSON.stringify: 1100ms
ratio of msgpack.pack/JSON.stringify: 4.464545454545455
✔ benchmark - JSON.stringify faster than msgpack.pack over 1m calls
msgpack.unpack: 1874ms, JSON.parse: 653ms
ratio of JSON.parse/msgpack.unpack: 2.8698315467075037
✔ benchmark - JSON.parse faster than msgpack.unpack over 1m calls
msgpack pack: 2327 ms
msgpack unpack: 18322 ms
json pack: 4014 ms
json unpack: 3344 ms
msgpack pack: 6327 ms
msgpack unpack: 17992 ms
json pack: 4164 ms
json unpack: 3439 ms
msgpack pack: 5950 ms
msgpack unpack: 18720 ms
json pack: 3761 ms
json unpack: 3404 ms
✔ benchmark - output above is from three runs on a 1m element array of objects
msgpack pack: 5022 ms
msgpack unpack: 1995 ms
json pack: 1378 ms
json unpack: 707 ms
msgpack pack: 4966 ms
msgpack unpack: 1987 ms
json pack: 1340 ms
json unpack: 703 ms
msgpack pack: 5044 ms
msgpack unpack: 2122 ms
json pack: 1376 ms
json unpack: 723 ms
✔ benchmark - output above is from three runs of 1m individual calls
OK: 5 assertions (143223ms)
The unpack of a 1m element array is nearly 2x slower with the new code. I was wondering whether there was a chance you forgot to rebuild, as these numbers are much different from what you pasted above. Also worth noting: I compiled and ran this on OSX.
Let me know if you think there is anything to be done about this performance drop. I have not read the code closely enough to see what may have caused it, but I will at some point.
If we cannot do anything about performance here, we should either wait until the release of node 0.12.0, or perhaps figure out some nice packaging magic. Maybe we do a major version bump that only works on node 0.11.x and newer. I'm open to ideas.
Yes, there was a performance regression.
It looks like nan is creating more objects(?) on older versions, which makes it slower.
Older versions will now use the current object-creation code, and only newer versions (>= 0.11.x) will use nan.
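A minimal sketch of what that compile-time gate could look like (my assumption of the approach, not the actual patch), keyed off the version macros from node_version.h:

```cpp
#include <node.h>
#include <node_version.h>
#include <nan.h>
#include <v8.h>

// Hypothetical helper: create a plain result object, going through nan only
// on node 0.11.x and newer, and keeping the direct v8 call on older versions.
static inline v8::Local<v8::Object> new_result_object() {
#if NODE_VERSION_AT_LEAST(0, 11, 0)
  // Newer V8: nan's NanNew hides the changed handle/isolate API and always
  // returns a Local.
  return NanNew<v8::Object>();
#else
  // 0.8.x / 0.10.x: keep the original, cheaper object creation.
  return v8::Object::New();
#endif
}
```

With a gate like this, 0.8.x/0.10.x should keep the old performance profile, and only 0.11.x+ goes through nan.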
node 0.10.28 - master:
msgpack.pack: 2559ms, JSON.stringify: 4072ms
ratio of msgpack.pack/JSON.stringify: 0.6284381139489195
✔ benchmark - msgpack.pack faster than JSON.stringify with array of 1m objects
msgpack.pack: 8772ms, JSON.stringify: 1298ms
ratio of msgpack.pack/JSON.stringify: 6.75808936825886
✖ benchmark - JSON.stringify faster than msgpack.pack over 1m calls
msgpack.unpack: 2490ms, JSON.parse: 741ms
ratio of JSON.parse/msgpack.unpack: 3.360323886639676
✔ benchmark - JSON.parse faster than msgpack.unpack over 1m calls
msgpack pack: 2536 ms
msgpack unpack: 9506 ms
json pack: 4307 ms
json unpack: 2772 ms
msgpack pack: 4586 ms
msgpack unpack: 9090 ms
json pack: 4482 ms
json unpack: 2883 ms
msgpack pack: 4126 ms
msgpack unpack: 9590 ms
json pack: 4087 ms
json unpack: 2766 ms
✔ benchmark - output above is from three runs on a 1m element array of objects
msgpack pack: 8828 ms
msgpack unpack: 2632 ms
json pack: 1677 ms
json unpack: 834 ms
msgpack pack: 8948 ms
msgpack unpack: 2645 ms
json pack: 1663 ms
json unpack: 812 ms
msgpack pack: 8829 ms
msgpack unpack: 2608 ms
json pack: 1662 ms
json unpack: 813 ms
✔ benchmark - output above is from three runs of 1m individual calls
node 0.10.28 - with nan:
msgpack.pack: 2653ms, JSON.stringify: 4024ms
ratio of msgpack.pack/JSON.stringify: 0.6592942345924453
✔ benchmark - msgpack.pack faster than JSON.stringify with array of 1m objects
msgpack.pack: 7163ms, JSON.stringify: 1282ms
ratio of msgpack.pack/JSON.stringify: 5.587363494539781
✔ benchmark - JSON.stringify faster than msgpack.pack over 1m calls
msgpack.unpack: 2760ms, JSON.parse: 755ms
ratio of JSON.parse/msgpack.unpack: 3.6556291390728477
✔ benchmark - JSON.parse faster than msgpack.unpack over 1m calls
msgpack pack: 2603 ms
msgpack unpack: 9730 ms
json pack: 3917 ms
json unpack: 2722 ms
msgpack pack: 4949 ms
msgpack unpack: 9014 ms
json pack: 4159 ms
json unpack: 2707 ms
msgpack pack: 4955 ms
msgpack unpack: 9178 ms
json pack: 4196 ms
json unpack: 2712 ms
✔ benchmark - output above is from three runs on a 1m element array of objects
msgpack pack: 7000 ms
msgpack unpack: 2913 ms
json pack: 1604 ms
json unpack: 833 ms
msgpack pack: 7071 ms
msgpack unpack: 2898 ms
json pack: 1553 ms
json unpack: 812 ms
msgpack pack: 6997 ms
msgpack unpack: 2947 ms
json pack: 1611 ms
json unpack: 827 ms
✔ benchmark - output above is from three runs of 1m individual calls
node 0.11.13 - with nan:
msgpack.pack: 5085ms, JSON.stringify: 1384ms
ratio of msgpack.pack/JSON.stringify: 3.6741329479768785
✖ benchmark - msgpack.pack faster than JSON.stringify with array of 1m objects
msgpack.pack: 7700ms, JSON.stringify: 1591ms
ratio of msgpack.pack/JSON.stringify: 4.8397234443746076
✔ benchmark - JSON.stringify faster than msgpack.pack over 1m calls
msgpack.unpack: 3021ms, JSON.parse: 1108ms
ratio of JSON.parse/msgpack.unpack: 2.726534296028881
✔ benchmark - JSON.parse faster than msgpack.unpack over 1m calls
msgpack pack: 2861 ms
msgpack unpack: 4316 ms
json pack: 1096 ms
json unpack: 2363 ms
msgpack pack: 5865 ms
msgpack unpack: 9931 ms
json pack: 2050 ms
json unpack: 2503 ms
msgpack pack: 5606 ms
msgpack unpack: 10015 ms
json pack: 2115 ms
json unpack: 2561 ms
✔ benchmark - output above is from three runs on a 1m element array of objects
msgpack pack: 8318 ms
msgpack unpack: 3601 ms
json pack: 1640 ms
json unpack: 1255 ms
msgpack pack: 7834 ms
msgpack unpack: 3101 ms
json pack: 1555 ms
json unpack: 1122 ms
msgpack pack: 7600 ms
msgpack unpack: 3357 ms
json pack: 1494 ms
json unpack: 1107 ms
✔ benchmark - output above is from three runs of 1m individual calls
Any update on merging this PR?
@dainis I'm testing your PR on a custom repo, at https://github.com/fermuch/msgpack-node/
I'm using it in a big application, and in a few days I'll report back on whether everything is working okay -- I don't see why it wouldn't, anyway. Is there a reason this is still unmerged?
Merged, version bumped, updated in npm, signed and tagged in both repos.
Thanks again @dainis and everyone that looked at this.
@fermuch I'm still interested in how this does in production for you. If you have good results I will wait a few weeks for reports and toss it into production too.
NAN is creating more objects when ensuring that all returned handles are Locals; some of them might, however, be unnecessary. I'll investigate.
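For illustration only (my reading of that explanation, not nan's actual source): on the 0.10.x API, guaranteeing a Local return means re-wrapping values that are often already Local, which costs an extra handle per call:

```cpp
#include <v8.h>

// Hypothetical example of the "ensure it's a Local" pattern on node 0.10.x.
static v8::Local<v8::String> guaranteed_local() {
  v8::HandleScope scope;                                  // old-style scope
  v8::Local<v8::String> value = v8::String::New("key");   // first handle
  // A defensive wrapper that must return a Local re-wraps the value even
  // though it already is one, creating a second handle.
  v8::Local<v8::String> rewrapped = v8::Local<v8::String>::New(value);
  return scope.Close(rewrapped);
}
```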
With the new V8 version in node 0.11.x almost everything is broken; the nan package provides backward/forward compatibility across node 0.8.x, 0.10.x, and 0.11.x.
This patch also removes the usage of "fast buffers": on node 0.11.x their performance is just terrible, and on 0.10.x there is almost no difference (a rough sketch of the two buffer paths follows after the benchmark labels below).
node 0.10.28 "fast buffer":
node 0.10.28 "slow buffers":
node 0.11.13 "fast buffers" :
0.11.13 "slow buffer" :