lukemanley opened 1 year ago:
The `assert_frame_equal` is also exceptionally slow:

```python
>>> %timeit pd.testing.assert_frame_equal(df1, df2)
10.5 s ± 110 ms per loop
```
`data.flags.c_contiguous` is `True`, i.e. the array is row-ordered.
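The cost comes from column slicing: taking a column of a C-ordered (row-major) 2D array yields a strided, non-contiguous view, since consecutive elements of the column are one full row apart in memory. A minimal illustration:

```python
import numpy as np

# np.random.randn returns a C-contiguous (row-major) array by default.
data = np.random.randn(4, 3)
print(data.flags.c_contiguous)   # True

# A column slice of a C-ordered array is a strided view:
col = data[:, 1]
print(col.flags.c_contiguous)    # False
print(col.flags.f_contiguous)    # False
print(col.strides)               # one full row (data.strides[0]) between elements
```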
We can probably do something like this inside the constructor:
```python
>>> if data.flags.c_contiguous:
...     data = np.asfortranarray(data)
>>> data.flags
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  WRITEBACKIFCOPY : False
```
and the column-sliced arrays will now be contiguous, e.g.
```python
>>> data = np.asfortranarray(data)
>>> data[:, 1].flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  WRITEBACKIFCOPY : False
```
My understanding is that `np.asfortranarray(data)` would copy in that case, which would be at odds with `copy=False`. If we're ok with copying regardless, I think `data.copy()` also produces a contiguous array.
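Both claims can be verified directly: `np.asfortranarray` must copy when the input is C-contiguous (it is a no-op only when the array is already Fortran-ordered), and `copy()` on a strided 1D view yields a contiguous array:

```python
import numpy as np

data = np.random.randn(5, 3)          # C-contiguous by default

# asfortranarray cannot reinterpret row-major data in place, so it copies:
f = np.asfortranarray(data)
print(np.shares_memory(data, f))      # False -> a copy was made

# ...but it is a no-op for an already Fortran-ordered array:
f2 = np.asfortranarray(f)
print(np.shares_memory(f, f2))        # True -> no copy

# copy() on a strided 1D column view produces a contiguous array:
col = data[:, 1]
print(col.flags.contiguous)           # False
print(col.copy().flags.contiguous)    # True
```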
Yeah, just tried this:

```python
import pandas as pd
import numpy as np

mi = pd.MultiIndex.from_product([np.arange(1000), np.arange(1000)])
data = np.random.randn(len(mi), 5)
data = np.asfortranarray(data)  # new line!
df1 = pd.DataFrame(data, index=mi, dtype="Float64")
df2 = pd.DataFrame(data, index=mi).astype("Float64")
assert df1.equals(df2)

%timeit df1.unstack()
%timeit df2.unstack()
```
with results:

```
170 ms ± 2.16 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
168 ms ± 2.98 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
> We can probably do something like this inside the constructor

Are you referring to the `DataFrame` constructor or the `BaseMaskedArray` constructor?
> My understanding is that `np.asfortranarray(data)` would copy in that case, which would be at odds with `copy=False`. If we're ok with copying regardless, I think `data.copy()` also produces a contiguous array.
Ok, I haven't checked the code in detail. These slowdowns are very large, so IMO - if `copy=None` in the `DataFrame` constructor - we should accept a copy/`asfortranarray`/something to avoid the slowdown, unless we find a way to do it without a copy. It seems worth it to me. The docs should then be updated as well.
> Are you referring to the `DataFrame` constructor or the `BaseMaskedArray` constructor?
It must be in the `DataFrame` constructor to transform the 2d array, right? `BaseMaskedArray` only deals in 1d arrays?
If you copy the 1D values passed into the `BaseMaskedArray`, it would create a contiguous array:
```python
import numpy as np

arr_2d = np.random.randn(10, 10)
arr_1d = arr_2d[:, 0]
print(arr_1d.flags.contiguous)  # False

arr_1d = arr_1d.copy()
print(arr_1d.flags.contiguous)  # True
```
> If you copy the 1D values passed into the `BaseMaskedArray`, it would create a contiguous array
True, but `copy=False` in `BaseMaskedArray.__init__`, so we'd have to change the default to `copy=None` IMO, if we want that. I'm not decided what I think about that, but at least it seems reasonable to interpret `copy=None` in the `DataFrame` constructor as an ok to do a copy there (but we'd need to change the docs to reflect it too).
There is another issue about this. This isn't without trade-offs, since it would require a copy in the constructor, which is something we'd like to avoid, but it could be introduced in the context of copy-on-write.
Could you post this there?
> There is another issue about this. This isn't without trade-offs, since it would require a copy in the constructor, which is something we'd like to avoid, but it could be introduced in the context of copy-on-write. Could you post this there?
Just searched, but not sure which issue you're referring to.
I agree the scope is a bit different here: in cases with extension dtypes like here, `data` will always be sliced by columns, while in the numpy dtype case in #50756, the data might be stored in 2D arrays, which may benefit from c-contiguous arrays in some cases.
Not saying the above changes anything, but it could mean that the treatment for extension dtypes could be different than for numpy dtypes. In any case, I think the performance costs for non-f-contiguous arrays are striking, and they deserve a serious discussion of the benefits vs. costs of doing a potential `asfortranarray` or similar on ndarrays in constructors.
In any case, I think we should discuss this issue in parallel with #50756.
The advantage is gone though as soon as you trigger an operation that forces a copy anyway. Which happens with most methods. That’s why we should view this in the context of copy on write where we have to create a copy anyway.
Some half-baked thoughts:
1) There are a few other issues that boil down to sub-optimal state (#50756 about numpy arrays, #42357 about pyarrow arrays, an old issue about consolidating blocks). We could lump all of these together into something like an "optimize" method.
1b) Exactly what optimizing means depends on what the user intends to do. Usually "axis 0 reductions" is a good guess. One advantage of laziness is not having to guess.
2) I am really uncomfortable with copy-at-construction defaults getting more complicated.
3) This problem would be avoided (or at least made no worse than the numpy case) with 2D EAs.
Note the significant difference in performance below for "equal" dataframes:
timings:
The difference being: `df1` maintains the memory layout of the original 2D array, which results in non-contiguous 1D EA arrays. `df2` ends up being contiguous because `.astype` defaults to `copy=True` and the copy results in a contiguous layout for the 1D EA arrays.

Not sure what the correct fix is... Add a `PerformanceWarning` in `BaseMaskedArray.__init__` when `copy=False` and the input is a non-contiguous ndarray? That might help users, but it still seems easy to run into this. In fact, scanning the ASVs, I see a handful of places where EA-backed dataframes are constructed similarly to `df1` above.