Closed — Alphish closed this issue 1 year ago
So, I did a rough performance test comparing the FPS of the current `script_execute_ext` approach against `array_reduce` with a reduction function that simply calls `min`, `max`, etc. over a large data set.
Results in the VM were that the two approaches stayed well within a frame or two of each other, but the reduction method can handle a much larger data set. This could use more testing across different data sizes, but on my machine I was getting around 350 ± 10 real FPS with 10k array entries when calling all the different functions as they exist now, and around the same when doing something like:
```gml
// instead of script_execute_ext
return array_reduce(_array, function(_prev, _current) {
    return max(_prev, _current);
});
```
The big difference was that with the second method you can have arrays with millions of entries without throwing a stack overflow. It requires more thorough testing, but seems promising.
EDIT: I tested some more with a proper benchmark; the reduction method is about 6 times slower, and on further thought it wouldn't work for the mean or median functions anyway.
PR accepted and merged.
I've found that while the `script_execute_ext` implementation works well for arrays with some tens of thousands of arguments or so - and it's pretty efficient, to boot - it crashes with a stack overflow error once the number of arguments reaches 60k or so (less than 50k for `array_median`). So this will need to be addressed in the implementations of the array functions. I'm in the middle of exploring the options right now; I think a limit of 10k arguments should be pretty safe to avoid the worst-case stack overflow scenario, and anything larger than that will need to be handled separately.
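For illustration, the size-gated fallback could look something like this (a sketch only - the 10k cutoff, the `array_max_safe` name, and passing `max` directly to `script_execute_ext` are assumptions, not the actual implementation):

```gml
// Hypothetical hybrid: varargs fast path for small arrays,
// plain-loop fallback for large ones to avoid the stack overflow.
#macro ARGUMENT_LIMIT 10000

function array_max_safe(_array) {
    var _count = array_length(_array);
    if (_count <= ARGUMENT_LIMIT) {
        // small enough to expand the array into arguments safely
        return script_execute_ext(max, _array);
    }
    // loop fallback: no argument expansion, so no stack overflow
    var _result = _array[0];
    for (var _i = 1; _i < _count; _i++) {
        _result = max(_result, _array[_i]);
    }
    return _result;
}
```

The loop path should also sidestep the ~6× `array_reduce` overhead measured above, since it avoids a method call per element.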
Additionally, unit tests will need to be written for especially large arrays (100k items should strike a nice balance between covering the potentially catastrophic scenario and not taking too much time).
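A large-array test might then look like this (hypothetical - `assert_equals` stands in for whatever assertion helper the test suite actually uses, and `array_max` for whichever large-array-safe function is being added):

```gml
// Hypothetical unit test: build a 100k-entry array and confirm
// the function survives it instead of overflowing the stack.
var _big = array_create(100000);
for (var _i = 0; _i < 100000; _i++) {
    _big[_i] = _i;
}
assert_equals(array_max(_big), 99999, "array_max handles 100k items");
```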