psiho / jsbench-me

jsbench.me - JavaScript performance benchmarking playground

faster/slower math #66

Open e6voe9 opened 1 year ago

e6voe9 commented 1 year ago

Tested .match() vs .includes() to determine which is faster. Of course .includes() is faster; I wanted to know how big the difference is.

Results are:

- .match() - 3.7M operations/second
- .includes() - 666M operations/second

Somehow it says that .match() is 99.45% slower. How? It's clear that the difference is ~18000%.

Including the data so you can test it yourself.

Setup JavaScript:


const str = "jhvhjgv23bj 2jh3g 4hj23g 4 g 23jhg bnj23 hgb hj 23gbhj4gn 2j3gv2j3fv j2 34j423b khnb23hn 4";

case 1


str.match("34j423b")

case 2


str.includes("34j423b")

Attaching a screenshot as well.
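For anyone who wants to reproduce the comparison outside jsbench.me, here is a rough standalone sketch with a plain Node.js timing loop (not a proper benchmarking harness, so the absolute numbers will differ, but the gap should still be obvious):

```js
// Rough reproduction of the .match() vs .includes() comparison.
// Plain timing loop, no warmup or statistics, so treat the numbers as approximate.
const str = "jhvhjgv23bj 2jh3g 4hj23g 4 g 23jhg bnj23 hgb hj 23gbhj4gn 2j3gv2j3fv j2 34j423b khnb23hn 4";

// Run `fn` for a fixed number of iterations and return operations per second.
function opsPerSecond(fn, iterations = 1_000_000) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const seconds = (performance.now() - start) / 1000;
  return iterations / seconds;
}

console.log("match:   ", opsPerSecond(() => str.match("34j423b")).toExponential(2), "ops/s");
console.log("includes:", opsPerSecond(() => str.includes("34j423b")).toExponential(2), "ops/s");
```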

benjaminsuch commented 1 year ago

EDIT: This comment was moved to a separate issue: #68

Can confirm, something seems to be off. In my case (https://jsbench.me/3pkjlwzhbr/1), it says that map.has is "100% slower", although it has the same number of operations per second as the fastest run (set.has). Meanwhile, obj[target] also has the same number of operations but is only "0.56% slower" (which makes sense, since the number is rounded). Screenshot attached.

psiho commented 1 year ago

@e6voe9 let's first take a look at your report. At first glance, I don't see a problem. The point is that we calculate a "slower" percentage, not a "faster" percentage. We cannot calculate a "faster" percentage because there can be more than one alternative case, so the question would be "faster than what?". When we calculate the "slower" percentage, it is always relative to the fastest case, of which there is only one.

Code that does this is: relative: (testCase.testRunner.result.hz / data.maxHz) - 1

So, in your case, the fastest case can do 666M ops per second and the slower case can do 3.7M ops per second, which means that in any given second the slower case does only 3.7 ops for every possible 666 ops, i.e. 3.7/666 ≈ 0.56% of the fastest case's work. Yes, I could also output "0.56% of the max", but IMHO that is exactly the same information as "99.44% slower than max".
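To make the arithmetic concrete, here is a minimal sketch that plugs the rounded numbers from this report into the same kind of relative calculation (illustrative only, not the actual jsbench.me code):

```js
// Illustrative only: the rounded ops/s figures from this report run through
// the same "hz / maxHz - 1" style of calculation quoted above.
const results = [
  { name: ".includes()", hz: 666e6 },
  { name: ".match()", hz: 3.7e6 },
];

const maxHz = Math.max(...results.map(r => r.hz));

for (const r of results) {
  const relative = r.hz / maxHz - 1;                          // 0 for the fastest, negative otherwise
  const percentSlower = (-relative * 100).toFixed(2);         // 99.44 for .match()
  const percentOfFastest = ((r.hz / maxHz) * 100).toFixed(2); // 0.56 for .match()
  console.log(`${r.name}: ${percentSlower}% slower, ${percentOfFastest}% of the fastest`);
}

// The ~18000% figure is the inverse ratio: 666 / 3.7 = 180, i.e. the fastest
// case runs at roughly 180x (18000%) the speed of the slowest one.
```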

What you're suggesting is 666/3.7 = 180, i.e. ~18000%, but the meaning of that is "the faster case is 180 times (18000%) the speed of the slower case". That is also correct, but for the reasons stated above it is not very useful when there are more than two cases.

So, in my opinion, the only real question here is how to represent the results:

option 1) as now, use "% slower", so we have "99.44% slower"
option 2) use 100% for the fastest and "% of the fastest" for all others (in this example 0.56%)
option 3) use "times slower than fastest", so in our case the slower one would be "180 times slower" (not very nice when it is only a bit slower, like 5% slower)

Some other JS benchmarks use option 2, which is easy to visualize with a bar chart. I could also imagine some hybrid of the above, or moving this to settings. But personally, I prefer option 1 as it is. So far, I have never received a complaint about this. I would like to know how others feel.
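For illustration, a quick sketch of how the three labels above could be derived from the same ratio (a hypothetical helper, not actual jsbench.me code):

```js
// Hypothetical helper for the three display options discussed above;
// not actual jsbench.me code.
function formatResult(hz, maxHz, option) {
  const ratio = hz / maxHz; // 1 for the fastest case
  switch (option) {
    case 1: // current behaviour: "% slower"
      return ((1 - ratio) * 100).toFixed(2) + "% slower";
    case 2: // "% of the fastest"
      return (ratio * 100).toFixed(2) + "% of the fastest";
    case 3: // "times slower than fastest"
      return (1 / ratio).toFixed(1) + "x slower than the fastest";
  }
}

// With the numbers from this issue (3.7M vs 666M ops/s):
console.log(formatResult(3.7e6, 666e6, 1)); // "99.44% slower"
console.log(formatResult(3.7e6, 666e6, 2)); // "0.56% of the fastest"
console.log(formatResult(3.7e6, 666e6, 3)); // "180.0x slower than the fastest"
```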

EDIT: the small difference in calculation, from the 99.45% slower displayed currently to the 99.44% slower I calculated manually, is probably due to variance and rounding (the 3.7M and 666M figures are themselves rounded). The discrepancy should not be noticeable for cases with similar speeds (where this actually matters).

e6voe9 commented 1 year ago

@psiho Thank you for this explanation. I feel like option 3 is closest to what I had in mind. Documentation on the webpage would definitely be helpful, so everybody can easily understand that, for example, "94% slower" means "6% of the fastest case's speed".