Closed avoidwork closed 6 years ago
Thanks for the PR 😄
Is it necessary to use tiny-worker? Why not use the Node child_process API directly?
Yes, it's necessary; all it does is set up the vm context and decorate a few functions so the child process mimics a web worker.
Merged since there didn't seem to be an objection.
ran a fresh clone on my i7-7700k ... i think it's working :D
@avoidwork What do you think about the results being totally different now? I mean, would it be possible to determine why some libraries are faster than others?
For example, it looks like using prototype inheritance is a must for being faster.
from what i looked at, the faster ones are striving for the least work possible (mine fails in a few spots). i don't think the prototype does much in this case, unless a lot of your core methods are in an outer scope.
mine can/should be f'd up by megamorphic objects because it's not using a Map to hold things, due to the cost of serialization if you want to sync with another cache. ¯\\_(ツ)_/¯
the benchmark itself plays a huge role in the overall ops/ms timing. if you take away that imperative while() and try to compress it with a functional approach (via an array of attributes to test with a loop) you cut the perf by 1/3 minimum.
@Kikobeats I just pushed up a revised benchmark; I noticed 2 mistakes yesterday:

- the math was wrong: i had erroneously moved the decimal, calculating ops/sec instead of ops/ms
- CPU contention created by running tests ASAP slowed down the faster caches, so now it's in an async pipeline such that the 'next' test runs after the 'prev' resolves, while still using a Promise.all() to collect all of the results
so the results are now different; we need to update them in readme.md
I'm thinking of setting up Travis just to run the script on each build and output the benchmark; then we can paste the result into the readme.
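A rough `.travis.yml` sketch of that idea (the script name and config are assumptions, not the repo's actual setup):

```yaml
language: node_js
node_js:
  - "node"
install:
  - npm install
script:
  - npm test
  # print the benchmark in the build log so the output
  # can be pasted into the readme
  - node benchmark.js
```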
Results are still different, the spread is wider by about 4x from og readme; it's already been updated btw.
The newest benchmark does run them as you'd like; it's a simple trick I used last year for spectron tests on an electron app: you create a promise for each test and exec the 'next' test from the resolution of the 'prev' test, using the array of promises for collection. This minimizes lighting up all CPU cores at once and avoids creating a resource & scheduling problem.
e.g. tiny-lru was showing around 4k ops/ms but is now showing around 20k ops/ms, with a similar increase for slots 2-4. The difference trails off as you get to the middle of the pack.
@avoidwork we have a little typo: in the bench, js-lru appears twice.
for the rest, looks awesome 💯
ugh, i'll fix right now. sorry!
Updated readme! I also ran the benchmark on my i7 again & the results are more believable 😂
should module to deal with the npm bug