oleksiyk closed this pull request 10 years ago.
Added Array.prototype.slice.call(value) as in the original code, plus a rough emulation of calculating the varint byte size for 50 fields. The results are:
$ node perf/test.js
testWithBuffer: 1204.8192771084339 op/s
testWithArray: 8.505571149102662 op/s
See the "performance" branch.
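For reference, here is a minimal sketch of what such a comparison might look like (an assumed reconstruction, not the actual perf/test.js from the branch): copy a large payload either with Buffer#copy or with Array.prototype.slice.call, and emulate the varint size math for 50 fields on each iteration.

```js
// Assumed reconstruction of the Buffer-vs-Array copy benchmark.
const src = Buffer.alloc(1024 * 1024, 0xab); // 1 MB of sample bytes

// Byte length of an unsigned varint, as used when sizing fields.
function varintByteLength(n) {
  let bytes = 1;
  while (n >= 0x80) { n >>>= 7; bytes++; }
  return bytes;
}

function testWithBuffer() {
  const dst = Buffer.alloc(src.length);
  src.copy(dst); // native, memcpy-style copy
  for (let i = 0; i < 50; i++) varintByteLength(src.length + i); // 50-field emulation
}

function testWithArray() {
  Array.prototype.slice.call(src); // boxes every byte into a JS number
  for (let i = 0; i < 50; i++) varintByteLength(src.length + i);
}

function opsPerSecond(fn) {
  let ops = 0;
  const end = Date.now() + 1000;
  while (Date.now() < end) { fn(); ops++; }
  return ops;
}

console.log('testWithBuffer:', opsPerSecond(testWithBuffer), 'op/s');
console.log('testWithArray:', opsPerSecond(testWithArray), 'op/s');
```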
I made a first pass at this; I'm not seeing any noticeable performance increase.
You have to test performance with large data in the bytes field to see the real difference. It's huge.
I've updated this pull request to benchmark a real encode of a bytes field with a 1.7MB image. It's based on the master branch, so initially it tests the original (array-based) performance:
$ node perf/test.js
8.97827258035554 op/s
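The shape of that benchmark is roughly the following sketch. Note that `encode` here is a hypothetical stand-in, not the library's actual API, and the image path is assumed:

```js
// Hypothetical shape of the bytes-field benchmark: `encode` stands in
// for the library's real encode function and is NOT its actual API.
const fs = require('fs');
const image = fs.readFileSync('perf/image.jpg'); // ~1.7 MB payload (path assumed)

function benchmark(encode) {
  let ops = 0;
  const end = Date.now() + 1000;
  while (Date.now() < end) {
    encode({ data: image }); // message with a single bytes field
    ops++;
  }
  console.log(ops, 'op/s');
}

// Example invocation with a trivial stand-in encoder (Buffer-based copy):
benchmark(function (msg) {
  const out = Buffer.alloc(msg.data.length);
  msg.data.copy(out);
  return out;
});
```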
Then merge your "performance" branch and try again:
$ git pull upstream performance
From https://github.com/nlf/protobuf.js
* branch performance -> FETCH_HEAD
Merge made by the 'recursive' strategy.
index.js | 203 +++++++++++++++++++++++++++++++++++++++++++++++++++++---------------------
lib/varint.js | 76 +++++++++++++++-------------
package.json | 2 +-
test/basic.js | 108 ++++++++++++++++++++++++++++++++-------
test/conversion.js | 2 +-
5 files changed, 278 insertions(+), 113 deletions(-)
$ node perf/test.js
1265.8227848101264 op/s
That's about 141x faster (1265.82 / 8.98 op/s)!
Closing, since we merged the performance branch into master.
This is not a pull request to be merged; it's a performance test for https://github.com/nlf/protobuf.js/issues/11
This is a dead-simple performance comparison for copying large binary data; on my machine, the results are the op/s numbers shown above.
Yes, we need to calculate the size of the buffer before copying, but that shouldn't change the results dramatically.
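For illustration, here is a sketch of that up-front size calculation (the function names and field number are assumptions, not the library's code). A protobuf bytes field on the wire is tag + varint(length) + payload, so the exact output size is known before any copying happens:

```js
// Compute the encoded size of a length-delimited (bytes) field before
// allocating, so the destination Buffer is allocated exactly once.
function varintLength(n) {
  let bytes = 1;
  while (n >= 0x80) { n >>>= 7; bytes++; }
  return bytes;
}

function bytesFieldSize(fieldNumber, payload) {
  const tag = (fieldNumber << 3) | 2; // wire type 2 = length-delimited
  return varintLength(tag) + varintLength(payload.length) + payload.length;
}

const payload = Buffer.alloc(1782579);                // stand-in for the 1.7MB image
const out = Buffer.alloc(bytesFieldSize(1, payload)); // single allocation up front
```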