leipert opened this issue 5 years ago
Also https://v8.dev/blog/cost-of-javascript-2019#json suggests:
> A good rule of thumb is to apply this technique for objects of 10 kB or larger — but as always with performance advice, measure the actual impact before making any changes.
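For context, the "technique" in question is shipping large, JSON-compatible data as a `JSON.parse` call over a string instead of as a plain object literal, roughly like this (a minimal sketch with made-up data, not taken from the benchmark itself):

```js
// Variant 1: plain JS object literal (the full JS grammar applies).
const dataLiteral = {
  name: 'example-app',
  version: '1.0.0',
  features: ['a', 'b', 'c'],
};

// Variant 2: the same data as a JSON.parse call. The payload is a single
// string token for the JS parser, and the simpler, faster JSON grammar
// does the actual work.
const dataParsed = JSON.parse(
  '{"name":"example-app","version":"1.0.0","features":["a","b","c"]}'
);
```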
I tried to change the tests to 80 kB, where I got results around 23.5 ms for both, which seems to be the startup time of V8 (an empty realm file yields the same result).
How would one go about doing a proper benchmark with such a small file?
I have now executed 2000 runs on an 80 kB file, see #2. As a baseline, I created a file `./out/empty.js` with no content. I have only run it against v8-7.5.288, and it shows the following results:
Benchmarking empty on v8-7.5.288… 48.380
Benchmarking JS literal on v8-7.5.288… 54.265
Benchmarking JSON.parse on v8-7.5.288… 51.835
So parsing and executing the empty file takes 21.69 milliseconds; let's use that as a baseline. The JS literal takes 27.133 milliseconds and JSON.parse 25.918 milliseconds. The pure parsing time should then roughly be:
|  | JS literal | JSON.parse | Speed-up |
| --- | --- | --- | --- |
| V8 v7.5 | 5.443 ms | 4.228 ms | 1.28× |
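Spelled out, the baseline subtraction behind that table looks like this (a small sketch using the per-run averages quoted above):

```js
// Per-run averages from the 2000-run measurement (milliseconds).
const emptyBaseline = 21.69; // parse + execute an empty file, i.e. startup cost
const jsLiteral = 27.133;    // 80 kB payload as a JS object literal
const jsonParse = 25.918;    // same payload via JSON.parse

// Subtract the startup baseline to estimate pure parsing time.
const literalParseMs = jsLiteral - emptyBaseline; // ≈ 5.443 ms
const jsonParseMs = jsonParse - emptyBaseline;    // ≈ 4.228 ms

// Relative speed-up of JSON.parse for this 80 kB payload.
console.log(literalParseMs / jsonParseMs); // ≈ 1.287, shown as 1.28× above
```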
So `JSON.parse` is still faster than the JS literal, closer to 1.3×. Parsing 80 kB (roughly one jQuery) is around 1.2 milliseconds faster. Looking at that, I would probably not recommend using this technique to save computing time in web bundles, for example; it might be different in Node.js, where one's server bill could be smaller if less computing is used.
> So it takes V8 v7.5 either 157 or 237 _milli_seconds to parse an 8 megabyte file. This is an impressive feature, but one would save "only" 80 ms vs the 8 seconds the README suggests.
I don't think the README suggests an 8-second improvement in any way. It explicitly points out how the measurements are taken (i.e. 100 `d8` invocations).

Btw, I noticed that you're only testing in V8 v7.5. If you pick only a single V8 version to test in, it should probably be the latest one (currently v7.7). Note that V8 v7.6 had significant `JSON.parse` improvements.
I know it doesn't say that, but looking at the README, the first thing that catches the eye is the table. If I were to present data like this in another context, it would look odd as well:

Travel times from Leipzig -> Berlin (200 km), benchmarked 100 times:
|  | Bob | Anna | Speed-up |
| --- | --- | --- | --- |
| Opel Corsa | 10000 minutes | 8000 minutes | 1.25× |
| Tesla | 8000 minutes | 6000 minutes | 1.33× |
vs
|  | Bob | Anna | Speed-up |
| --- | --- | --- | --- |
| Opel Corsa | 100 minutes | 80 minutes | 1.25× |
| Tesla | 80 minutes | 60 minutes | 1.33× |
I guess what I am saying is: I am just used to benchmarks and statistics being normalized to the individual measurement. You'd also never say: this group of 100 people is 6720 years old vs. this group, which is 6540 years old 😉
@mathiasbynens Thank you for the great work, I learned a lot: `jsvu`, didn't know about that one. But I have a question regarding methodology. Currently the timings are the aggregated results for the hundred runs. So that means that the stats from the README could also read:
So it takes V8 v7.5 either 157 or 237 _milli_seconds to parse an 8 megabyte file. This is an impressive feature, but one would save "only" 80 ms vs the 8 seconds the README suggests.
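To make the per-run vs. aggregate distinction explicit, here is the conversion spelled out (a sketch that assumes, as discussed above, that the README figures are totals over the 100 `d8` invocations):

```js
// The README timings aggregate 100 d8 invocations per engine.
const runs = 100;

// An ~8 second difference in the aggregate therefore corresponds to:
const aggregateSavingSeconds = 8;
const perRunSavingMs = (aggregateSavingSeconds / runs) * 1000;
console.log(perRunSavingMs); // 80, i.e. ~80 ms saved per run on the 8 MB payload
```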