skaae opened 6 years ago
df2.index.values[0] = 'c' # changes both df1 and df2
Indexes are immutable. Changing its underlying data is going to cause all sorts of problems.
ok. I think the documentation of copy is unclear then: "Make a deep copy, including a copy of the data and the indices."
hhhmmm, I would expect a copy of a dataframe to be truly deep when deep=True.
Which bit is unclear? The indices are copied, they are different objects:
In [3]: df1.index is df2.index
Out[3]: False
But the underlying data are shared between indexes since they're immutable.
I am noticing that http://pandas-docs.github.io/pandas-docs-travis/dsintro.html doesn't have a section for Index. It'd be good to add a short one describing Index for the various dtypes. Kind of the same issue as https://github.com/pandas-dev/pandas/issues/19505, meaning the docs need a bit more.
But the underlying data are shared between indexes since they're immutable.
Is there any reason the underlying data in the index is not copied? It seems that df.values is actually copied, but the indices are not?
Is there any reason the underlying data in the index is not copied?
Performance. Since indices are immutable, the underlying data can safely be shared. There's no reason to copy it. DataFrames / series are mutable, so the data need to be copied.
It seems that df.values is actually copied, but the indices are not?
And just to be clear, the index is a copy, since they are different objects. It's the underlying values (which users should not be mutating) that are not copied.
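A minimal sketch of that distinction, assuming a pandas version affected by this issue (on newer versions the buffer may no longer be shared; np.shares_memory only inspects, it does not mutate):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame(index=["a", "b"], columns=["foo", "muu"])
df2 = df1.copy(deep=True)

# The Index *objects* are distinct copies with equal labels...
print(df1.index is df2.index)  # False
print(df2.index.equals(df1.index))  # True

# ...but on affected versions the underlying buffer may still be shared,
# which np.shares_memory can reveal without mutating anything.
print(np.shares_memory(df1.index.values, df2.index.values))
```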
ok. I'll close this.
Apparently there is a deep='all' that deals with exactly this (whether or not to also copy the underlying index data). To illustrate with the original example:
In [21]: df1 = pd.DataFrame(index=['a', 'b'], columns=['foo', 'muu'])
In [22]: df2 = df1.copy(deep=True)
In [23]: df2.index.values[0] = 'c'
In [24]: df1
Out[24]:
foo muu
c NaN NaN <--- updated
b NaN NaN
In [25]: df3 = df1.copy(deep='all')
In [26]: df3.index.values[1] = 'd'
In [27]: df1
Out[27]:
foo muu
c NaN NaN
b NaN NaN <--- not updated
But, deep='all' is completely undocumented, and as far as I can find from a quick search, also only used once in our own code base (https://github.com/pandas-dev/pandas/blob/master/pandas/_libs/reduction.pyx#L537).
I am not sure we actually want to document this? But then we should maybe just remove that ability?
yes, this was really only internal and never officially implemented (or meant to be); it should be removed.
IMO, copy(deep=True) should completely sever all connections between the original and the copied object; compare the official python docs (https://docs.python.org/3/library/copy.html):
A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original.
So, IMO, deep=True should come to mean what deep='all' does currently (and the latter can then be removed).
Re:
Indexes are immutable. Changing its underlying data is going to cause all sorts of problems.
This is not a valid argument IMO - it's up to me as a user (consenting adults and all...) what I do with my objects, including the indexes, and if I make a deep copy, it's a justified expectation (I would even argue: a built-in expectation of the word "deep") that this will not mess with the original.
Plus, if I'm already deep-copying the much larger values of a DF, not copying the index only saves a comparatively irrelevant amount of memory.
Indexes are immutable. Changing its underlying data is going to cause all sorts of problems.
This is not a valid argument IMO - it's up to me as a user (consenting adults and all...) what I do with my objects, including the indexes
There are other problems as well, not related to copying, that make directly changing underlying values a bad idea. For example, the internal hashtable that is used for indexing will no longer be valid if you change the underlying values of an index (so indexing will give wrong results).
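The hashtable point can be illustrated in plain Python, independent of pandas internals: a lookup table built from some values goes stale if those values are mutated behind its back (the names here are just an analogy to the Index engine, not real pandas code):

```python
labels = ["a", "b", "c"]

# Build a hash-based lookup structure from the labels, much like the
# Index engine builds its hashtable from the underlying values.
positions = {label: i for i, label in enumerate(labels)}

# Mutate the underlying data behind the lookup structure's back.
labels[0] = "z"

# The hashtable is now stale: it still answers for the old label...
print(positions["a"])    # 0
# ...and knows nothing about the new one.
print("z" in positions)  # False
```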
not copying the index only saves a comparatively irrelevant amount of memory.
For a DataFrame that might be true (depending on its size), but not for a Series.
To be clear, I am personally not necessarily against changing this (IMO it would make the behaviour more straightforward, at the cost of some performance; so it is a trade-off, and I am not fully sure on which side of it I am), only answering some of your arguments.
One additional thing. You mention the comparison to the stdlib deep copy behaviour, but note that even deep='all' is not comparable to that (it does copy the index, but it still does not copy python objects inside the values recursively).
One additional thing. You mention the comparison to the stdlib deep copy behaviour, but note that even the deep='all' is not comparable to that (it does copy the index, but it still does not copy python objects inside the values recursively).
Isn't that moving the goal posts? It is within the power of pandas to influence how its own indexes are handled, whereas arbitrary python objects can obviously be quite complicated.
But even then, the meaning of deep in vanilla python follows the "complete separation" interpretation:
from copy import deepcopy
x = [0, 1]
x.append(x)
x
# [0, 1, [...]]
y = deepcopy(x)
y[2][0] = 10 # same for arbitrarily many times "[2]"
y
# [10, 1, [...]]
x
# [0, 1, [...]]
The example still reproduces on master. Could use a test.
In [38]: df1 = pd.DataFrame(index=['a', 'b'], columns=['foo', 'muu'])
...: df1.index.name = "foo"
...: print(df1)
...:
...: # create deep copy of df1 and change a value in the index
...: df2 = df1.copy(deep=True)
...: df2.index.name = "bar"
...: df2.index.values[0] = 'c' # changes both df1 and df2
...:
...: print(df1)
...: print(df2)
foo muu
foo
a NaN NaN
b NaN NaN
foo muu
foo
c NaN NaN
b NaN NaN
foo muu
bar
c NaN NaN
b NaN NaN
In [39]: pd.__version__
Out[39]: '1.1.0.dev0+1216.gd4d58f960'
I ran into this problem while coding today. Glad to see it was already reported. Just to chime in and agree with a couple of points:
I believe all of the few languages with which I am familiar (including python) adhere to the convention that "deep" means a complete severance of all connections between the original and the copied object. Imho it's therefore difficult to justify that deep=True should be anything other than deep='all'.
I'm not sure I can agree with this statement, unqualified:
Since indices are immutable, the underlying data can safely be shared
It seems to me that the *underlying data of an immutable object should also be immutable, or not shared, or, as one person commented above, considered private*. Pandas, on the other hand, officially gives the user direct read/write access to the underlying mutable data, via DataFrame.Index.values and DataFrame.Index.array. Either deep=True should be the same as deep='all' (my personal preference for reasons stated above), or perhaps DataFrame.Index.values and DataFrame.Index.array should return a copy (thus treating the underlying data as private). Or perhaps both of these changes should be made, in order to be consistent with both ideas: that "deep" really should mean deep, and "immutable" really should mean immutable.
That said, I understand that there may be performance reasons to find a middle ground. I would be OK using deep='all' if all of this was completely documented: that deep=True does not meet the common expectation of "deep", and that deep='all' is necessary to achieve that. I think it should also be documented with a clear warning that the references DataFrame.Index.values and DataFrame.Index.array provide dangerous mutable access to the underlying data of an immutable object (thereby allowing the user to corrupt that object).
In fact this is exactly how I discovered this bug today. It never occurred to me that DataFrame.Index.values would allow me to directly manipulate the Index (because the documentation said Index is immutable). I wanted to change the index, but knowing that the index was immutable I wrote the code like this:
df2 = df1.copy()
values = df2.index.values
values[3] = foo
df2.index = pd.Index(values)
I was surprised to find that the index of df1 had also changed. It took me a while to realize that the index (of both dataframes) changed when I did values[3] = foo (not, as I originally thought, when I reassigned df2.index).
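For what it's worth, the pattern above becomes safe if the values are copied before they are mutated (a workaround sketch, not an endorsement of mutating .values directly):

```python
import pandas as pd

df1 = pd.DataFrame({"x": range(4)}, index=["a", "b", "c", "d"])
df2 = df1.copy()

# Copy the underlying values *before* mutating, so the original
# index's buffer is never touched.
values = df2.index.values.copy()
values[3] = "foo"
df2.index = pd.Index(values)

print(list(df1.index))  # ['a', 'b', 'c', 'd']  (unchanged)
print(list(df2.index))  # ['a', 'b', 'c', 'foo']
```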
thanks for the commentary @DanielGoldfarb, happy to have contributions to improve things
@jreback
Jeff, thanks for the response. Have the maintainers made any decisions as to which direction to go with this? With a little guidance, I would be happy to contribute code and/or documentation.
--Daniel
After df_copy = df_orig.copy(deep=True), I expect list((df_orig==df_copy).all(axis=1)).count(True) to match len(df_orig).
For large dataframes, this is not the case. Just to be sure, is this the same issue? Are the indexes different?
If it is the case, this is really disturbing and I’m afraid this may have led people to wrong results (I only found it by chance).
I also ran into this today, and discovered that even though the id of the index was different on the copy, modifying the cp.index.to_numpy() values was corrupting the original.
I am totally in line with @DanielGoldfarb's point 1:
I believe all of the few languages with which I am familiar (including python), adhere to the convention that "deep" means a complete severance of all connections between the original and the copied object. Imho it's therefore difficult to justify that deep=True should be anything other than deep='all'.
A fix for this could be composed of the following elements:
- have deep=True behave (and be documented) as intuition suggests, that is, with absolutely no shared items between the copy and the original. The deep='all' alias can stay around, but if it is not yet official, maybe it should rather be dropped now.
- accept a new deep='values' where only the values are deep-copied. This is the same behaviour as today's deep=True. Make this the default to preserve legacy compatibility and speed.
- optionally accept a new deep='index' where only the index is deep-copied. I would not really know why this would be needed, but this is just for symmetry of the API.
Would this be ok for everyone ?
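In the meantime, a hypothetical helper (the name deep_copy is made up, and it ignores columns and MultiIndex for brevity) can emulate the proposed fully-deep deep=True by rebuilding the index from a copied buffer:

```python
import numpy as np
import pandas as pd

def deep_copy(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical workaround, not a pandas API: copy the frame, then
    rebuild the index from a copied buffer so no data is shared."""
    out = df.copy(deep=True)
    out.index = pd.Index(df.index.values.copy(), name=df.index.name)
    return out

df1 = pd.DataFrame(index=["a", "b"], columns=["foo", "muu"])
df2 = deep_copy(df1)

print(df2.index.equals(df1.index))                           # True: same labels...
print(np.shares_memory(df1.index.values, df2.index.values))  # False: ...independent buffers
```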
Encountered a similar issue: if a dataframe contains nested dataframes, copy(deep=True) is not actually "deep".
Code to reproduce the issue:
import pandas as pd
df1 = pd.DataFrame({"foo": [pd.DataFrame({"bar": [1]})]})
print(df1)
df2 = df1.copy(deep=True)
df_inner = df2.loc[0, "foo"]
df_inner *= 2
print(df2)
print(df1)
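For this nested case, one possible workaround (a sketch, not an official API, and it only handles one level of nesting) is to run copy.deepcopy over the object column's elements so the inner frames are copied too:

```python
import copy
import pandas as pd

df1 = pd.DataFrame({"foo": [pd.DataFrame({"bar": [1]})]})

# copy(deep=True) copies the object array, but its elements still
# reference the very same inner DataFrames, so deep-copy them explicitly.
df2 = df1.copy(deep=True)
df2["foo"] = df2["foo"].apply(copy.deepcopy)

df_inner = df2.loc[0, "foo"]
df_inner *= 2  # mutates the copy's inner frame in place

print(df1.loc[0, "foo"].loc[0, "bar"])  # 1  (original untouched)
print(df2.loc[0, "foo"].loc[0, "bar"])  # 2
```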
- I believe all of the few languages with which I am familiar (including python), adhere to the convention that "deep" means a complete severance of all connections between the original and the copied object. Imho it's therefore difficult to justify that deep=True should be anything other than deep='all'.
Echoing @DanielGoldfarb's comment: the argument deep=True in the copy method is misleading.
Problem description
DataFrame.copy(deep=True) is not a deep copy of the index.
In https://github.com/pandas-dev/pandas/blob/a00154dcfe5057cb3fd86653172e74b6893e337d/pandas/core/indexes/base.py#L787 maybe deep should be set to True?