richardeschloss opened this issue 4 years ago
This is what I got on Firefox (on Windows 10):
range_(corecursion): 41ms - timer ended
range_(loop): 2ms - timer ended
Increasing end also caused the code to hit InternalError: too much recursion.
From this site:
The maximal recursion depth is limited by JavaScript engine. We can rely on it being 10000, some engines allow more, but 100000 is probably out of limit for the majority of them. There are automatic optimizations that help alleviate this (“tail calls optimizations”), but they are not yet supported everywhere and work only in simple cases.
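That quoted ceiling can be checked empirically. The probe below is a sketch of my own (not from the thread; the helper name maxDepth is hypothetical) that counts how many nested calls the engine accepts before throwing:

```javascript
// Count how deep recursion can go before the engine gives up.
// V8 throws RangeError ("Maximum call stack size exceeded");
// SpiderMonkey throws InternalError ("too much recursion").
function maxDepth() {
  let depth = 0;
  const probe = () => {
    depth++;
    probe();
  };
  try {
    probe();
  } catch (e) {
    // Stack overflow reached; depth holds the number of successful calls
  }
  return depth;
}

console.log(maxDepth());
```

The exact number varies with the engine and with how large each frame is (more locals and arguments mean fewer frames fit on the stack), so treat the result as an order-of-magnitude figure, not a constant.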
Every time a function is called, a context (scope) is created and memory allocation needs to happen for all the variables used within the call. When finished, all the memory allocated variables are no longer referenced, so garbage starts to accumulate until the GC decides to kick in.
Now looking at the for loops, memory allocation is done only once: on the header. At every iteration, the new value is assigned to the same piece of memory, so no new allocations occur and no garbage is generated until the end of the loop, when the variables are no longer pointed at.
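To make the contrast concrete, here is a minimal sketch of my own (not from the thread) of the same computation written both ways; the recursive version allocates a fresh stack frame per call, while the loop reuses a single pair of bindings:

```javascript
// Recursive sum: one new stack frame (and its locals) per call.
const sumRec = n => (n === 0 ? 0 : n + sumRec(n - 1));

// Loop sum: `total` and `i` are allocated once and reused every iteration.
const sumLoop = n => {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
};

console.log(sumRec(1000)); // fine at modest depths
console.log(sumLoop(1e7)); // fine at any size; sumRec(1e7) would overflow the stack
```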
Let's do a quick analysis of @richardeschloss's code. There's just one simple action, split across 3 lines, that generates significant overhead:
const unfold = (f, seed) => {
  const next = (f, val, acc) => {
    if (!f(val)) return acc
    // Here, the call to f(val) returns an array, which has already allocated memory.
    // Destructuring then causes 2 more value allocations for currVal and nextVal,
    // plus a value copy for each
    const [currVal, nextVal] = f(val);
    // In JS, objects passed as function arguments are passed by reference, but
    // primitives are passed by value, which triggers another copy & memory
    // allocation here
    acc.push(currVal)
    // Then again, another copy & memory allocation here for nextVal
    return next(f, nextVal, acc);
  }
  return next(f, seed, [])
}
Now, if instead of destructuring we assign the return value of f(val) (the array) to a single variable and use that array directly in the acc.push() and next() calls, we avoid 2 extra memory allocations & copies. This might sound meaningless, but it's not:
const unfold = (f, seed) => {
  const next = (f, val, acc) => {
    if (!f(val)) return acc
    const [currVal, nextVal] = f(val);
    acc.push(currVal)
    return next(f, nextVal, acc);
  }
  return next(f, seed, [])
}

const unfoldAlt = (f, seed) => {
  const next = (f, val, acc) => {
    if (!f(val)) return acc
    const resVal = f(val);
    acc.push(resVal[0])
    return next(f, resVal[1], acc);
  }
  return next(f, seed, [])
}
const rangeCorecursion = (start, end, alt) =>
  (alt ? unfoldAlt : unfold)(val => (val <= end)
    ? [val, val + 1]
    : null
  , start);

const rangeLoop = (start, end) => {
  const acc = []
  for (let i = start; i <= end; i++) {
    acc.push(i)
  }
  return acc
}
const end = 5000
console.time('range_(corecursion)')
const range_test1 = rangeCorecursion(0, end)
console.timeEnd('range_(corecursion)')
console.time('range_(corecursionAlt)')
const range_test2 = rangeCorecursion(0, end, true)
console.timeEnd('range_(corecursionAlt)')
console.time('range_(loop)')
const range_test3 = rangeLoop(0, end)
console.timeEnd('range_(loop)')
Results in (executed in Chrome dev tools console):
range_(corecursion): 6.258056640625ms
range_(corecursionAlt): 2.860107421875ms
range_(loop): 0.413818359375ms
Just by avoiding that extra allocation+copy we got a more than 2x performance increase.
Finally, since the for-loop allocates only once, in its header, while the recursion allocates on every function call, it now makes sense why there's such a meaningful difference in performance.
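Since the bottleneck is the per-call stack frame, one way to keep the unfold interface while sidestepping both the overhead and the stack limit is to drive the recursion with a loop internally. This is a sketch of my own (the names unfoldIter and range are hypothetical), not the actual code from PR #7:

```javascript
// A loop-based unfold that keeps the corecursive interface but never grows
// the call stack, so large ranges no longer hit "too much recursion" /
// "Maximum call stack size exceeded".
const unfoldIter = (f, seed) => {
  const acc = [];
  let res = f(seed);       // res is either [currVal, nextVal] or null
  while (res) {
    acc.push(res[0]);      // emit the current value
    res = f(res[1]);       // advance using the next seed
  }
  return acc;
};

const range = (start, end) =>
  unfoldIter(val => (val <= end ? [val, val + 1] : null), start);

console.log(range(0, 5).join(',')); // 0,1,2,3,4,5
```

This also calls f only once per step, unlike the original, which calls f(val) both in the guard and in the destructuring.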
Thanks for the analysis! I just updated PR #7 to pass the array values by reference. Test results are in this JSFiddle.
Another good read might be this discussion on the human eye's perceived latency. I realize I'm talking about a really long list (of 5000+ elements), whereas many lists in a UI might be limited to 100. So it's possible the performance issues I'm raising concerns about may rarely be encountered in a UI, but they're still worth thinking about.
Hi @richardeschloss @auterium, I have rewritten the intro. Please let me know what you think. Thanks!
@stevemao that looks much more reasonable. If it were up to me, I would still change the title to "why you (usually) don't need loops", but whether to change it is up to you.
I like the updates and I like the "Simple English" points. I think you bring up good points.
Thank you! I did some minor tweaks again.
Hi, I just submitted pull request #7, which shows a significant performance improvement with some simple code changes. However, even with this improvement, the for loop still performs much better, at least in my environment: Chromium 73 (Linux).
Consider the following test:
As soon as I bump the end up to anything over 5000, Chromium hits "Maximum call stack size exceeded" errors. Using the for loop, not only do I avoid that error, I can bump the end of the range all the way up to about 135,000 and have it finish before the corecursion method finishes 5,000 iterations.
Is the performance different on Safari? What do those numbers look like?