Open · rgommers opened 3 years ago
I haven't been in the discussions lately but drive-by commenting since I have some strong opinions about this and was one of the main people voicing concerns about the `to_numpy` and `to_arrow` stuff:
> The main pro of doing this is:
> - A potential performance gain in the dataframe conversion (TBD how significant)
Is this performance gain to just eliminate the control flow code of constructing say Arrow's containers around memory that we'd be passing around zero copy anyway? If this was implemented in C/C++ (which I imagine most Python libraries would end up doing) then I'd argue this becomes negligible anyway.
> For `__arrow_array__` I cannot think of issues right away.
Arrow Array objects are backed by Arrow Buffer objects, which is an abstract interface that can be backed by CPU or GPU or future device memory. This wouldn't make any guarantees about where the memory is, only what the container is, and possibly give a standard API to work with the container (though most Arrow APIs will currently throw exceptions or segfault if you try to use them with GPU memory).
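For illustration, a small sketch of that layering with plain PyArrow on CPU memory (nothing in the Buffer-level attributes shown here tells you which device the bytes live on):

```python
import pyarrow as pa

arr = pa.array([1, 2, None, 4])
# An Arrow Array is a container over Buffer objects (validity bitmap + data).
# A Buffer exposes an address and a size, but by itself gives no guarantee
# about whether that memory is CPU- or GPU-resident.
for buf in arr.buffers():
    if buf is not None:
        print(buf.address, buf.size)
```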
My 2c: we should keep the interchange protocol limited to a memory layout description and focus on ensuring we can make the memory interchange zero copy and then doing our best to ensure libraries can use it as efficiently as possible.
Thanks for the input @kkraus14
> Is this performance gain to just eliminate the control flow code of constructing say Arrow's containers around memory that we'd be passing around zero copy anyway? If this was implemented in C/C++ (which I imagine most Python libraries would end up doing) then I'd argue this becomes negligible anyway.
Yes indeed, just about control flow. And I agree it'd be a very minor gain.
> Arrow Array objects are backed by Arrow Buffer objects, which is an abstract interface that can be backed by CPU or GPU or future device memory. This wouldn't make any guarantees about where the memory is, only what the container is, and possibly give a standard API to work with the container (though most Arrow APIs will currently throw exceptions or segfault if you try to use them with GPU memory).
I'm actually not quite sure how to interpret this bit. Why would these guarantees be needed (if `__arrow_array__` is used by both consumer and producer, it seems like this should "just work")?
> My 2c: we should keep the interchange protocol limited to a memory layout description and focus on ensuring we can make the memory interchange zero copy and then doing our best to ensure libraries can use it as efficiently as possible.
This does sound like the better option to me too - it's less complexity overall.
> I'm actually not quite sure how to interpret this bit. Why would these guarantees be needed (if `__arrow_array__` is used by both consumer and producer, it seems like this should "just work")?
Because you're not guaranteed that downstream of every consumer is just using high level dataframe code / PyArrow code. Someone could have an extension written in C/C++ that assumes buffers are in CPU memory for example.
So then we still need to inspect the flag to determine whether to copy the data to the CPU and presumably call a PyArrow specific API to get a new PyArrow array backed by CPU memory. It adds a bunch of complexity for basically 0 gain.
> My 2c: we should keep the interchange protocol limited to a memory layout description and focus on ensuring we can make the memory interchange zero copy and then doing our best to ensure libraries can use it as efficiently as possible.
>
> This does sound like the better option to me too - it's less complexity overall.
I opened https://github.com/data-apis/dataframe-api/issues/279 as an alternative to this issue but to achieve the same goal. That proposal is then actually only about a memory layout, without being tied to a specific library (i.e. pyarrow in this case).
It wouldn't yet support GPU (since the Arrow PyCapsule interface doesn't support that yet), but GPU dataframe interchange objects can then simply not add those methods for now to indicate they don't support this.
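Roughly, that means a producer only defines the PyCapsule methods when it can actually hand over CPU-resident Arrow data. A minimal sketch (the `InterchangeDataFrame` class is purely illustrative and assumes a recent PyArrow that implements the PyCapsule interface):

```python
import pyarrow as pa

class InterchangeDataFrame:
    # Illustrative producer: only expose __arrow_c_stream__ when the data is
    # CPU-resident Arrow data; a GPU-backed implementation would simply not
    # define this method for now.
    def __init__(self, table: pa.Table):
        self._table = table

    def __arrow_c_stream__(self, requested_schema=None):
        # Delegate to pyarrow, which returns an ArrowArrayStream PyCapsule.
        return self._table.__arrow_c_stream__(requested_schema)


df = InterchangeDataFrame(pa.table({"a": [1, 2, 3]}))
if hasattr(df, "__arrow_c_stream__"):
    # Recent PyArrow versions can consume any object exposing the PyCapsule
    # interface, without depending on the producer's dataframe library.
    result = pa.table(df)
```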
This was brought up by @jorisvandenbossche: if two libraries both use the same library for in-memory data storage (e.g. buffers/columns are backed by NumPy or Arrow arrays), can we avoid iterating through each buffer on each column by directly handing over that native representation?
This is a similar question to https://github.com/data-apis/dataframe-api/blob/main/protocol/dataframe_protocol_summary.md#what-is-wrong-with-to_numpy-and-to_arrow - but it's not the same, there is one important difference. The key point of that FAQ entry is that it's consumers who should rely on NumPy/Arrow, and not producers. Having a `to_numpy()` method somewhere is at odds with that. Here is an alternative:

1. A `Column` instance may define `__array__` or `__arrow_array__` if and only if the column itself is backed by a single NumPy or an Arrow array.
2. `DataFrame` and `Buffer` instances must not define `__array__` or `__arrow_array__`.

(1) is motivated by wanting a simple shortcut like this:
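A minimal sketch of what such a consumer-side shortcut could look like (the `consume_column` helper is illustrative and not part of the protocol; only `describe_null` and the two dunder methods come from the proposal above):

```python
import numpy as np
import pyarrow as pa

def consume_column(col):
    if hasattr(col, "__arrow_array__"):
        # The producer guarantees the column is backed by a single Arrow
        # array, so pa.array() can pick it up without iterating over buffers.
        return pa.array(col)
    if hasattr(col, "__array__"):
        # describe_null must be checked first: with __array__, nulls can only
        # show up as NaN or a sentinel value in the returned ndarray.
        null_kind, null_value = col.describe_null
        return np.asarray(col)
    # Fall back to the generic buffer-by-buffer interchange path.
    raise NotImplementedError("generic interchange path not shown here")
```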
However, there are other constraints then. For `__array__` this then also implies:

- `NaN` or a sentinel value for nulls (and this needs checking first in the code above - otherwise the consumer may still misinterpret the data)
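To make that concrete, a tiny illustration with assumed values (in the interchange protocol, a `describe_null` kind of 2 means a sentinel value is used):

```python
import numpy as np

# Suppose a producer represents nulls in an integer column with -1.
data = np.array([3, -1, 7])
null_kind, null_value = 2, -1   # what col.describe_null would report

if null_kind == 2:
    # Without consulting describe_null, the consumer cannot distinguish a
    # real -1 from a null; with it, a validity mask can be reconstructed.
    valid = data != null_value
```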
For `__arrow_array__` I cannot think of issues right away.

Of course the producer should also be careful to ensure that there are no differences in behavior due to adding one of these methods. For example, if there's a dataframe with a nested dtype that is supported by Arrow but not by the protocol, calling `__dataframe__()` should raise because of the unsupported dtype.

The main pro of doing this is:

- A potential performance gain in the dataframe conversion (TBD how significant)

The main con is:
My impression is: this may be useful to do for `__arrow_array__`; I don't think it's a good idea for `__array__` because the gain is fairly limited and there are too many constraints or ways to get it wrong (e.g. `describe_null` must always be checked before using `__array__`). If `__array__` is to be added, then maybe at the `Buffer` level where it plays the same role as `__dlpack__`.
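A rough sketch of what that could mean: a hypothetical CPU-backed `Buffer` exposing `__array__` alongside `__dlpack__` (the `CpuBuffer` class and its use of NumPy are assumptions for illustration, not part of the spec):

```python
import numpy as np

class CpuBuffer:
    # Hypothetical protocol Buffer backed by a NumPy array; __array__ would
    # sit at the same level as __dlpack__ and describe raw memory only.
    def __init__(self, data: np.ndarray):
        self._data = data

    @property
    def bufsize(self) -> int:
        return self._data.nbytes

    @property
    def ptr(self) -> int:
        return self._data.ctypes.data

    def __dlpack__(self, stream=None):
        return self._data.__dlpack__()

    def __dlpack_device__(self):
        return self._data.__dlpack_device__()

    def __array__(self, dtype=None):
        # Same role as __dlpack__, but for NumPy consumers
        # (zero copy when no dtype conversion is requested).
        return self._data if dtype is None else self._data.astype(dtype)
```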