Closed by bakkot 7 years ago
If we turn it into a stack of zippers, it will not be faster, but it will not exhaust the call stack. Any idea how this could be made faster than allocating d (= depth) objects?
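A minimal sketch of the idea: iterate the concat tree with an explicit heap-allocated stack of pending subtrees instead of recursion, so depth cannot overflow the call stack. The `Node`/`Leaf`/`Concat` types here are illustrative stand-ins, not the project's real API.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Hypothetical concat tree: a leaf holds one element, an inner node
// concatenates two subtrees (names are illustrative, not the real API).
abstract class Node<A> {}
final class Leaf<A> extends Node<A> {
    final A value;
    Leaf(A value) { this.value = value; }
}
final class Concat<A> extends Node<A> {
    final Node<A> left, right;
    Concat(Node<A> left, Node<A> right) { this.left = left; this.right = right; }
}

// Iterates the tree left-to-right using an explicit heap stack holding
// at most d (= depth) nodes, so deep trees cannot exhaust the call stack.
final class TreeIterator<A> implements Iterator<A> {
    private final Deque<Node<A>> stack = new ArrayDeque<>();

    TreeIterator(Node<A> root) {
        if (root != null) stack.push(root);
    }

    public boolean hasNext() { return !stack.isEmpty(); }

    public A next() {
        while (!stack.isEmpty()) {
            Node<A> n = stack.pop();
            if (n instanceof Leaf) return ((Leaf<A>) n).value;
            Concat<A> c = (Concat<A>) n;
            stack.push(c.right); // right pushed first so left is visited first
            stack.push(c.left);
        }
        throw new NoSuchElementException();
    }
}
```

This still allocates nothing beyond the one `ArrayDeque`, but it pays the same O(d) stack space as the current implementation, just on the heap.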
Turn it into an `ImmutableList` and expose an iterator on that. Alternatively, don't even expose an iterator; you can do `for (A a : concatList.toList())` instead.
Well, an `ImmutableList` is n objects, which is actually more than d objects, not to mention the considerable overhead of `ImmutableList` itself. Also, converting a tree to a list needs a stack (either on the heap or just the call stack) of size d, which is equivalent to what you pay in the current iterator implementation.
The time complexity is linear either way; the cost is mostly GC-related. And a linked-list structure is in fact really bad for GC: mark-and-sweep traversal of it is completely serialized, for example, which prevents the GC from running in parallel.
Also, don't forget that iterating with an iterator can terminate early, while converting to a copy of the list up front cannot.
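To make the early-termination point concrete, here is a sketch with a counting iterator (`CountingRange` is a hypothetical name): stopping at the first match only touches a prefix of the elements, whereas copying to a list first would force all n of them.

```java
import java.util.Iterator;

// An iterator over 0..limit-1 that records how many elements it has
// actually produced, to show how much work early termination saves.
final class CountingRange implements Iterator<Integer> {
    private int next = 0;
    private final int limit;
    int produced = 0; // number of elements handed out by next()

    CountingRange(int limit) { this.limit = limit; }

    public boolean hasNext() { return next < limit; }

    public Integer next() {
        produced++;
        return next++;
    }
}
```

Searching a 1000-element range for the first value `>= 3` consumes only 4 elements through the iterator; materializing the list first would always pay for all 1000.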
A `for`-each loop in Java uses an iterator behind the scenes: since it works through the `Iterable<T>` interface, you cannot use a for-each loop without exposing the iterator. You can override `Iterable.forEach` so that callers who invoke it directly avoid the iterator, but then again, inside it you will still need a stack of size d.
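A sketch of such a `forEach`, again with illustrative node types rather than the project's real ones: it drives the traversal with an explicit `Deque` holding at most d pending subtrees, instead of recursing down the tree.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

// Illustrative concat-tree node types (not the project's real API).
abstract class CNode<A> {}
final class CLeaf<A> extends CNode<A> {
    final A value;
    CLeaf(A v) { value = v; }
}
final class CPair<A> extends CNode<A> {
    final CNode<A> left, right;
    CPair(CNode<A> l, CNode<A> r) { left = l; right = r; }
}

final class ConcatTrees {
    // forEach driven by an explicit stack of at most d pending subtrees;
    // no recursion, so the call stack stays flat no matter how deep the
    // tree is, but the O(d) space cost has not gone away.
    static <A> void forEach(CNode<A> root, Consumer<? super A> action) {
        Deque<CNode<A>> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            CNode<A> n = stack.pop();
            if (n instanceof CLeaf) {
                action.accept(((CLeaf<A>) n).value);
            } else {
                CPair<A> p = (CPair<A>) n;
                stack.push(p.right); // right below left: left visited first
                stack.push(p.left);
            }
        }
    }
}
```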
This PR. Should we close this issue?
Edit: I was wrong. For example, creating a list of 30 items by appending them one at a time creates a binary tree of depth 30, and iterating through it thus requires creating 30 nested iterators, which is apparently slow.
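A quick sketch of why that happens, with hypothetical node types: each single-element append wraps the old root in one more inner node, so n appends yield a left-leaning tree of depth n.

```java
// Illustrative concat-tree node types (stand-ins for the real structure).
abstract class TNode<A> {}
final class TLeaf<A> extends TNode<A> {
    final A value;
    TLeaf(A v) { value = v; }
}
final class TConcat<A> extends TNode<A> {
    final TNode<A> left, right;
    TConcat(TNode<A> l, TNode<A> r) { left = l; right = r; }
}

final class Depth {
    // Depth of the tree: a leaf counts as 1, each inner node adds 1.
    static int depth(TNode<?> n) {
        if (n instanceof TLeaf) return 1;
        TConcat<?> c = (TConcat<?>) n;
        return 1 + Math.max(depth(c.left), depth(c.right));
    }
}
```

Appending 30 items one at a time produces a tree of depth 30, and a naive recursive iterator then nests one iterator per level.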