nkemnitz closed this issue 6 years ago
I know why this is happening, but not why it's so bad. What you're seeing is compilation time/cost. For each size of tuple it's compiling a new version of `chain`, but I think that since you're calling it recursively with splatting in a loop, it may be recompiling the same functions over and over. I don't think it's your fault, though. It's either something I need to fix in the `chain` code or some kind of compiler issue (which probably has a workaround).

I'll look into this at the end of this week, but in the meantime you might be better off using `Iterators.flatten`.
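A minimal sketch of the suggested workaround (the `vectors` variable and its contents are illustrative, not from the original example): `Iterators.flatten` from Julia's Base builds a single lazy iterator over all nested elements, so no new `chain` specialization needs to be compiled for each tuple size, and no copies are made.

```julia
# Illustrative stand-in for the per-node Vector{Foo} data in the question.
vectors = [[1, 2], [3], [4, 5, 6]]

# A single lazy iterator over all elements: no copies, no per-size
# recompilation of chain().
flat = Iterators.flatten(vectors)

length(collect(flat))  # 6
```

Because `flatten` is lazy, `collect` is only needed here to materialize the result; iterating `flat` directly allocates nothing extra.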
Oh, thanks for the quick response and the `flatten` suggestion! Will try that one.
Fun fact: at least for the above example, if I set `d` to a value higher than 5, the weird behavior does not occur either... it might really not be an IterTools thing.
Hmm, yeah, that sounds like the effect of some heuristic limit controlling compilation of recursive functions.
Hi all,

There is some strange exponential time and memory behavior that I don't understand. I have a tree with some additional data (`Vector{Foo}`) on every node, and eventually I want to iterate through all the `Foo` objects without unnecessary allocations. At first I thought I had run into an infinite loop, because the CPU got stuck at 100% and nothing else would happen, but I was able to track the problem down to the following minimal example. I'll just use a linear list instead of a wider tree here, and run `length()` instead of collecting items; that is enough to show the exponential behavior.

Result for the above example (`d=5`, `n=19`):
If I run the same loop again, the times are suddenly what I would've expected in the first place:
I guess my question is: why is that, and how can I fix it efficiently? I could change `chain` to `vcat`, but I was hoping to avoid any unnecessary copies/allocations.