Fixes #14, and makes normalizing and adding data to a map roughly twice as fast as before, according to my basic benchmarks. 😄
The tradeoff is that normalizing now uses function recursion, which means we push a stack frame on every step and are limited by the stack size, whereas the previous loop/recur approach allowed "unlimited" depth.
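To illustrate the tradeoff (this is a hypothetical sketch of walking a nested map, not the actual normalization code): plain function recursion consumes one JVM stack frame per level of nesting, while loop/recur with an explicit worklist keeps depth bounded only by the heap.

```clojure
;; Hypothetical example: count nodes in a nested map.
;; Function recursion — depth limited by the JVM stack:
(defn count-nodes [node]
  (if (map? node)
    (reduce + 1 (map count-nodes (vals node)))
    1))

;; loop/recur with an explicit worklist — "unlimited" depth,
;; since the pending nodes live on the heap, not the call stack:
(defn count-nodes* [root]
  (loop [stack [root], n 0]
    (if (empty? stack)
      n
      (let [node  (peek stack)
            stack (pop stack)]
        (if (map? node)
          (recur (into stack (vals node)) (inc n))
          (recur stack (inc n)))))))
```

On deeply nested data the first version throws StackOverflowError while the second keeps going; the cost is the bookkeeping of the explicit stack, which is roughly where the speed difference in this change comes from.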
I tried using fast-zip to do this in a tail-recursive/"stackless" fashion; however, it was about 15–30% slower in my basic benchmarking and less deterministic (probably due to GC).
I would like to verify this against our production use to see whether we run into any cases where we blow the stack. If it seems fine for reasonable data sizes, I may later provide additional normalizing functions that aren't limited by the stack size, to accommodate huge trees of data being added at once.