Closed vsemozhetbyt closed 7 years ago
Yikes... those performance hits... especially for Uint8Array.
There are still improvements being made in the area of TypedArrays: https://bugs.chromium.org/p/v8/issues/detail?id=5977
Hmm, yeah, I think we should definitely keep an eye on that before merging. We use Uint8Array a lot.
@bmeurer @psmarshall could we keep an eye on this?
Yes, this is mostly known. We know that there's some work to be done, not only for TypedArrays.
I had to abort the buffer-swap.js benchmark after 2 runs because 1 run lasts ~ 2.5 hours, so the default 60 runs would take 6 days. Maybe somebody with a free fast machine will be able to run a whole buffer-swap.js cycle later.
A gist with the same data.
@vsemozhetbyt You can reduce the iteration count n (and/or the number of runs if the results don't seem to vary too wildly) if a single run takes too long. I would make sure a single run executes for at least 5-10 seconds or so to give V8 plenty of time to properly optimize code and whatnot.
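The warm-up point can be illustrated with a small standalone timing loop. This is a hypothetical micro-benchmark sketch, not code from the Node.js benchmark suite; the `swap` function and iteration counts are made up for illustration:

```javascript
// Hypothetical micro-benchmark: time in-place Uint8Array reversal.
// A warm-up phase runs the hot function first so the JIT (TurboFan)
// has a chance to optimize it before we start measuring.
function swap(buf) {
  for (let i = 0, j = buf.length - 1; i < j; i++, j--) {
    const t = buf[i];
    buf[i] = buf[j];
    buf[j] = t;
  }
}

const buf = new Uint8Array(1024);
for (let i = 0; i < buf.length; i++) buf[i] = i & 0xff;

// Warm-up: not measured.
for (let i = 0; i < 1e4; i++) swap(buf);

// Measured phase: keep n large enough that the run lasts long
// enough for timing noise to average out.
const n = 1e5;
const start = process.hrtime.bigint();
for (let i = 0; i < n; i++) swap(buf);
const ns = Number(process.hrtime.bigint() - start);
console.log(`${(n / (ns / 1e9)).toFixed(0)} ops/sec`);
```

If the warm-up loop is removed, the measured phase includes time spent in unoptimized code, which is one reason very short runs report unstable numbers.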
@mscdex If I get it right, the problem is that a nominal 'one run', as the progress bar and compare.js have it, includes many real runs with different config variants, from 2 up to more than 100. These real runs can last very different amounts of time with the same n parameter, from a second up to several minutes. If we try to keep all these real runs at 5-10 seconds or more, the real runs that are already the longest become enormous.
For example, I've reduced n for buffer-swap.js from 5e7 to 1e6. The whole set of 60 nominal runs lasted ~ 3 hours, with one nominal run taking ~ 3 min. This benchmark has 66 config variants; some lasted near 1 second, some near 15-20 seconds. To balance these cases, n would have to be changed separately and conditionally for each config variant. Unfortunately, this can be achieved only by editing the code or by many complicated sets of CLI arguments and separate runs.
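The per-variant balancing described above could be sketched as a small helper that scales n from a short calibration measurement. Everything here is hypothetical (the function, the variant names, and the calibration numbers are invented for illustration; nothing like this exists in the benchmark runner):

```javascript
// Hypothetical helper: choose an iteration count per config variant
// so each variant targets roughly the same wall-clock time.
// `baselineNsPerOp` stands in for data from a short calibration run.
function iterationsFor(variant, targetSeconds = 10) {
  const baselineNsPerOp = {
    'size=16 method=swap16': 40,        // tiny buffers: very fast per op
    'size=1024 method=swap64': 600,
    'size=1048576 method=htons': 900000 // huge buffers: slow per op
  };
  const nsPerOp = baselineNsPerOp[variant] ?? 1000;
  return Math.max(1, Math.round((targetSeconds * 1e9) / nsPerOp));
}

console.log(iterationsFor('size=16 method=swap16'));     // large n for cheap variants
console.log(iterationsFor('size=1048576 method=htons')); // much smaller n for expensive ones
```

The idea is simply that cheap variants get a large n and expensive variants a small one, so no single variant dominates the total run time.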
@vsemozhetbyt Yep, benchmarking isn't always easy :-) Explicitly passing a lower --runs value may help, as previously noted, especially for benchmarks that don't involve I/O and are thus usually less volatile.
streams:
improvement confidence p.value
streams/readable-bigread.js n=1000 -10.73 % *** 1.269154e-12
streams/readable-bigunevenread.js n=1000 -13.88 % *** 1.834009e-19
streams/readable-boundaryread.js n=2000 -13.83 % *** 4.606770e-10
streams/readable-readall.js n=5000 -9.64 % *** 3.697793e-25
streams/readable-unevenread.js n=1000 -8.92 % *** 8.211917e-14
streams/writable-manywrites.js n=2000000 -22.36 % *** 9.987796e-27
Example output with --trace-opt --trace-deopt on streams/readable-bigread.js
This commit in the v8/node/vee-eight-lkgr branch has turned the new Ignition+TurboFan pipeline on by default. To see the performance impact, I've built vee-eight-lkgr before and after this commit and have run the common Node.js benchmark suites with both. Both builds have these same versions:
As my machine is not a fast one, I will post the results piecemeal.
UPD. All suites are complete.
TOC:
buffer-swap.js (with --set n=1e6)
foreach-bench.js
tcp-raw-pipe.js