dominictarr / bench-lru

MIT License
89 stars 11 forks

Refactoring to multi-process approach #15

Closed avoidwork closed 6 years ago

avoidwork commented 7 years ago

(screenshot: benchmark results, 2017-11-12)

Kikobeats commented 7 years ago

Thanks for the PR 😄

Is it necessary to use tiny-worker? Why not use the Node child_process API directly?

avoidwork commented 7 years ago

Yes, it's necessary: all it does is set up the vm context and decorate a few functions so the child process mimics a web worker.

avoidwork commented 6 years ago

Merged since there didn't seem to be an objection.

avoidwork commented 6 years ago

(screenshot: i7-7700k benchmark results)

ran a fresh clone on my i7-7700k ... i think it's working :D

Kikobeats commented 6 years ago

@avoidwork

What do you think about the results now that they're totally different? I mean, would it be possible to determine why some libraries are faster than others?

For example, it looks like using prototype inheritance is a must for being faster.

avoidwork commented 6 years ago

from what i looked at, the faster ones are striving to do the least work possible (mine fails in a few spots). i don't think the prototype does much in this case, unless a lot of your core methods are in an outer scope.

mine can/should be f'd up by megamorphic objects because it's not using a Map to hold things, due to the cost of serialization if you want to sync with another cache. ¯\_(ツ)_/¯

the benchmark itself plays a huge role in the overall ops/ms timing. if you take away that imperative while() and try to compress it with a functional approach (via an array of attributes to test with a loop), you cut the perf by 1/3 minimum.
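A hypothetical illustration of that difference (toy cache, made-up names): the imperative while() calls each method directly, while the "compressed" version dispatches through an array of operation names, paying a closure call and property lookup per op.

```javascript
// Toy cache just to drive the loops; names are illustrative.
function makeCache() {
  const store = new Map();
  return {
    set: (k, v) => store.set(k, v),
    get: k => store.get(k),
    store
  };
}

// Imperative style: direct calls, easy for the JIT to optimize.
function benchImperative(cache, n) {
  let i = 0;
  while (i < n) {
    cache.set(i, i);
    cache.get(i);
    i++;
  }
}

// "Functional" style: same work, but each op goes through an array
// of method names plus a callback, which costs dispatch overhead.
function benchFunctional(cache, n) {
  const ops = ["set", "get"];
  for (let i = 0; i < n; i++) {
    ops.forEach(op => cache[op](i, i)); // get ignores the extra arg
  }
}
```

Both loops do identical cache work; timing each with process.hrtime() typically shows the dispatch version slower, though the exact ratio varies by engine and workload.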

avoidwork commented 6 years ago

@Kikobeats I just pushed up a revised benchmark; I noticed 2 mistakes yesterday...

  1. the math was wrong, i had erroneously moved the decimal to calculate ops/sec instead of ops/ms

  2. CPU contention created by running tests ASAP slowed down the faster caches, so now it's in an async pipeline such that the 'next' test runs after the 'prev' resolves, while still using a Promise.all() to collect all of the results
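To illustrate point 1 with made-up numbers: sliding the decimal converts between ops/ms and ops/sec, a factor-of-1000 difference in the reported figure.

```javascript
// Made-up numbers: 200,000 operations measured over 10 ms.
const ops = 200000;
const elapsedMs = 10;

const opsPerMs = ops / elapsedMs;           // 20,000 ops/ms
const opsPerSec = ops / (elapsedMs / 1000); // 20,000,000 ops/sec

// Reporting one as the other misstates throughput by 1000x.
console.assert(opsPerSec === opsPerMs * 1000);
```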

Kikobeats commented 6 years ago

so the results are now different; we need to update them in readme.md

I'm thinking of setting up Travis just to run the script on each build and output the benchmark; then we can paste the result into the readme.

avoidwork commented 6 years ago

Results are still different; the spread is wider by about 4x compared to the og readme. it's already been updated btw.

The newest benchmark does run them as you'd like; it's a simple trick I used last year for spectron tests on an electron app: you create a promise for each test and exec the 'next' test from the resolution of the 'prev' test, using the array of promises for collection.

This minimizes lighting up all CPU cores at once and avoids creating a resource & scheduling problem.

e.g. tiny-lru was showing around 4k ops/ms but now showing around 20k ops/ms, with a similar increase for slots 2-4. The difference trails off as you get to the middle point.
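A hedged sketch of the promise-chaining trick described above (function names are hypothetical): every test's promise exists up front so Promise.all can collect them, but each test's work only starts when the previous one resolves.

```javascript
// Run async tests one at a time: the 'next' test starts in the
// resolution of the 'prev' one, so only one test loads the CPU
// at any moment, while Promise.all gathers all the results.
function runSequentially(tests) {
  let prev = Promise.resolve();
  const promises = tests.map(test => {
    prev = prev.then(() => test()); // chain: start after the previous test
    return prev;                    // keep a handle for collection
  });
  return Promise.all(promises);
}
```

Each entry in `tests` is a function returning a promise; results come back in test order regardless of how long individual tests take, which avoids the CPU contention of firing everything at once.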

Kikobeats commented 6 years ago

@avoidwork we have a little typo: in the bench, js-lru appears twice.

for the rest looks awesome 💯

avoidwork commented 6 years ago

ugh, i'll fix it right now. sorry!

avoidwork commented 6 years ago

Updated readme! I also ran the benchmark on my i7 again & the results are more believable 😂
