Closed: dcporter closed this issue 6 years ago
As long as we're being anecdotal: I almost never want full flattening. My usual use case is that I have an array of arrays of Ts, and would like to have an array of Ts. I'd be very surprised if I sometimes got something else. Infinite depth flattening is really weird if you're modeling the types of your data.
For a concrete use case: suppose I have a list of polygons, where a polygon is represented by a list of vertices given as `[x, y]`. If I flatten this list, I am expecting a list of points, not a list of alternating `x` and `y` coordinates.
Following on from #9, which changed the default depth of `flatten` from `Infinity` to `1`, I would like to revisit this. A (highly unscientific) survey of friends has suggested that real-life use cases strongly favor full flattening.

If shallow flattening is the undesired behavior, then the performance benefit of defaulting to `depth = 1` is illusory: if there is only one layer of arrays, the performance is the same, and if there are more, the result is simply wrong.

Are we wrong about real-life use cases strongly favoring full flattening? If not, or even if the preference is only slight, I find the element of surprise to outweigh the default performance profile.