Closed · HugeLetters closed this 12 months ago
Pretty sure that a simple for-loop beats any reduce'ing too:
const chunk = (input, size) => {
  const arr = [];
  for (let i = 0; i < input.length; i += size) {
    arr.push(input.slice(i, i + size));
  }
  return arr;
};
const ret = chunk(['a', 'b', 'c', 'd', 'e', 'f'], 3);
console.log(JSON.stringify(ret, null, 2));
But I have no interest in testing performance, since it's not a priority for me right now.
> Pretty sure that a simple for-loop beats any reduce'ing too: […]
This is definitely even faster, but I don't know if the purpose of the examples is to be easily accessible to beginner devs, so I've tried to keep modifications to a minimum. I think your example should be included instead of mine, but I would like to hear what other people think.
I've created a simple performance test
> I've created a simple performance test
Nice, interesting, seems like reduce is way faster than the for-loop:
Even though I see potential to optimize it further, like hoisting const n = input.length; and preallocating the Array to the proper length (instead of using push, just a simple arr[i] = ...).
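That preallocation idea could look something like this (a rough sketch, chunkPrealloc is a hypothetical name not from the thread, and its speed isn't measured here):

```javascript
// Sketch of the preallocation idea: hoist the length, size the result
// array up front, and assign by index instead of push.
const chunkPrealloc = (input, size) => {
  const n = input.length;
  const arr = new Array(Math.ceil(n / size));
  for (let i = 0, j = 0; i < n; i += size, j++) {
    arr[j] = input.slice(i, i + size);
  }
  return arr;
};

console.log(JSON.stringify(chunkPrealloc(['a', 'b', 'c', 'd', 'e', 'f'], 4)));
// [["a","b","c","d"],["e","f"]]
```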
> I've created a simple performance test
Pretty sure something is wrong with this tool - there's no way the lodash method is that much slower than the rest, especially considering it's vastly faster on my PC.
I get how it sounds when I say I don't believe these benchmarks, but I really do find it unlikely that the previous native version, which does multiple array copies on every iteration, is faster than mine, which only does mutations, or @kungfooman's, which does only length/chunkSize array copies.
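To make the copy-count argument concrete, here is a tiny illustration (not from the thread) of why a spread-based accumulator gets slow: every iteration copies the whole accumulator, so the total work grows quadratically.

```javascript
// Each `[...acc, x]` copies every existing element plus one, so over n
// iterations the total elements written is 1 + 2 + ... + n = n(n+1)/2.
let copies = 0;
let acc = [];
for (let i = 0; i < 1000; i++) {
  acc = [...acc, i];    // full copy of the accumulator each time
  copies += acc.length; // elements written this iteration
}
console.log(copies); // 500500, i.e. 1000 * 1001 / 2
```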
> Pretty sure something is wrong with this tool
Lol, had the same feeling... tried to edit the benchmark, but couldn't even find a way to do that, so I forgot about it; bad UI and I'm gone 😅
This is what I get running locally
const chunkNew = (input, size) => {
  return input.reduce((arr, item, idx) => {
    if (idx % size === 0) arr.push([item])
    else arr[arr.length - 1].push(item)
    return arr
  }, [])
}

const chunkOld = (input, size) => {
  return input.reduce((arr, item, idx) => {
    return idx % size === 0 ? [...arr, [item]] : [...arr.slice(0, -1), [...arr.slice(-1)[0], item]]
  }, [])
}

const chunkFor = (input, size) => {
  const arr = []
  for (let i = 0; i < input.length; i += size) arr.push(input.slice(i, i + size))
  return arr
}

// warmup
for (const _ of Array(100)) {
  const testArray = new Array(10000).fill(1).map(Math.random)
  const nativeOld = chunkOld(testArray, 37)
  const nativeNew = chunkNew(testArray, 37)
  const nativeFor = chunkFor(testArray, 37)
}

const arraySizes = [10, 10_000, 1_000_000]
const chunkSizes = [2, 10, 100, 12345].reverse()
for (const arraySize of arraySizes) {
  const testArray = new Array(arraySize).fill(1).map(Math.random)
  for (const chunkSize of chunkSizes) {
    if (chunkSize > arraySize) continue
    console.time(`native old - ${arraySize} by ${chunkSize}`)
    const nativeOld = chunkOld(testArray, chunkSize)
    console.timeEnd(`native old - ${arraySize} by ${chunkSize}`)
    console.time(`native new - ${arraySize} by ${chunkSize}`)
    const nativeNew = chunkNew(testArray, chunkSize)
    console.timeEnd(`native new - ${arraySize} by ${chunkSize}`)
    console.time(`native for - ${arraySize} by ${chunkSize}`)
    const nativeFor = chunkFor(testArray, chunkSize)
    console.timeEnd(`native for - ${arraySize} by ${chunkSize}`)
  }
}
And the results are:
native old - 10 by 10: 0.176ms
native new - 10 by 10: 0.021ms
native for - 10 by 10: 0.015ms
native old - 10 by 2: 0.02ms
native new - 10 by 2: 0.02ms
native for - 10 by 2: 0.019ms
native old - 10000 by 100: 12.512ms
native new - 10000 by 100: 0.392ms
native for - 10000 by 100: 0.038ms
native old - 10000 by 10: 44.783ms
native new - 10000 by 10: 0.438ms
native for - 10000 by 10: 0.277ms
native old - 10000 by 2: 149.928ms
native new - 10000 by 2: 0.926ms
native for - 10000 by 2: 0.344ms
It just gets stuck because the old version is so slow. If I remove it, I get:
native new - 10 by 10: 0.162ms
native for - 10 by 10: 0.015ms
native new - 10 by 2: 0.021ms
native for - 10 by 2: 0.015ms
native new - 10000 by 100: 0.387ms
native for - 10000 by 100: 0.123ms
native new - 10000 by 10: 0.654ms
native for - 10000 by 10: 0.085ms
native new - 10000 by 2: 0.952ms
native for - 10000 by 2: 0.342ms
native new - 1000000 by 12345: 147.472ms
native for - 1000000 by 12345: 4.72ms
native new - 1000000 by 100: 52.133ms
native for - 1000000 by 100: 3.244ms
native new - 1000000 by 10: 64.022ms
native for - 1000000 by 10: 16.614ms
native new - 1000000 by 2: 251.763ms
native for - 1000000 by 2: 253.353ms
I've tried the other online benchmarking tool I found - surprisingly same weird results :|
Yup, same results for me, for + slice is always faster than reduce:
Maybe those benchmark pages are run by functional-programming sect leaders who fit the test results for everything containing calls to FP methods... conspiracy 🕵️
False alarm about the sect leaders, you actually have to fill your arrays m8:
> False alarm about the sect leaders, you actually have to fill your arrays m8:
They are filled. This line.
const testArray = new Array(arraySize).fill(1).map(Math.random)
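For anyone following along, the fill step matters because new Array(n) alone creates a sparse array with holes, and .map() skips holes entirely. A quick demonstration (illustrative, not from the thread):

```javascript
// new Array(n) has length n but no actual elements (holes);
// .map() skips holes, so without .fill() nothing gets mapped.
const sparse = new Array(3).map(Math.random);
const filled = new Array(3).fill(1).map(Math.random);

console.log(0 in sparse); // false: still a hole after map
console.log(0 in filled); // true: a real number
console.log(sparse.length, filled.length); // 3 3
```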
I mean here... I removed the old version because it can't even handle the load (the tab crashes).
New result:
(2) being the for/slice version
> They are filled. This line.
Yep, I realized it started unfilled with @ODudek's first benchmark test
> I mean here... I removed the old version, because it can't even handle the load (tab crashes).
Oh right, I think that explains the first benchmark too - I just copied it over to a new site without looking too much into it
@HugeLetters So what's your plan, make a PR with the fastest version?
> @HugeLetters So what's your plan, make a PR with the fastest version?
Nah, I've closed it since there's no response from anyone on the contributors team - if they have no interest in improving the examples, then so be it.
@HugeLetters Right, so @ODudek, would you have time to review and merge once @HugeLetters changes it to the fastest version?
The native chunk method provided in the example is rather slow - these are some simple improvements to it, in case people copy code from this repo.
I've tried to keep the formatting consistent with what I saw, but apologies if some of it is not.