rgommers opened this issue 3 years ago
There was a little bit of hesitation about adding this function to a public API. For the initial version I'd suggest adding phrasing along these lines: implement the `__dataframe__` method, and provide either `DataFrame.__init__()` support or a `.from_dataframe` function with the signature given above. And if you need extensions, come talk to us.

This would be nice to revisit before everyone makes up their own thing in a different namespace in their library. pandas already does it like this:
```python
>>> import pandas as pd
>>> pd.__version__
'1.5.0rc0'
>>> [name for name in dir(pd.api.interchange) if not name.startswith('_')]
['DataFrame', 'from_dataframe']
>>> pd.api.interchange.from_dataframe?
Signature: pd.core.interchange.from_dataframe.from_dataframe(df, allow_copy=True) -> 'pd.DataFrame'
See https://pandas.pydata.org/docs/dev/reference/api/pandas.api.interchange.from_dataframe.html
```
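For completeness, using it looks like this (assuming pandas >= 1.5 is installed; with any other library implementing the protocol the call is analogous):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})
# Any object with a __dataframe__ method is accepted; a pandas DataFrame is used
# here only to keep the example self-contained.
df2 = pd.api.interchange.from_dataframe(df, allow_copy=True)
assert df2.equals(df)
```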
Do you want to standardize the signature, or also the namespace / location in the library?
Good point. I think those are separate questions. Signature is more important I'd say. Namespace is only important once we have a concept of a "dataframe API standard namespace" - so that can be ignored for the purpose of this issue.
Pandas code and signature:

```python
def from_dataframe(df, allow_copy=True) -> pd.DataFrame:
```

Vaex code and signature:

```python
def from_dataframe_to_vaex(df: DataFrameObject, allow_copy: bool = True) -> vaex.dataframe.DataFrame:
```

Modin code and signature (a free function plus a classmethod):

```python
def from_dataframe(df):

class PandasDataframe:
    def from_dataframe(cls, df: "ProtocolDataframe") -> "PandasDataframe":
```

cuDF code and signature:

```python
def from_dataframe(df, allow_copy=False):
```
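Despite the differing names and defaults, these are all thin wrappers over the protocol. A minimal sketch of the common shape (`_convert_interchange_df` is a hypothetical, library-specific helper):

```python
def from_dataframe(df, allow_copy=True):
    """Construct a library-native dataframe from any object exposing __dataframe__."""
    if not hasattr(df, "__dataframe__"):
        raise TypeError("`df` does not implement the __dataframe__ protocol")
    interchange_df = df.__dataframe__(allow_copy=allow_copy)
    # Walk the interchange object's columns and buffers to build the native dataframe.
    return _convert_interchange_df(interchange_df)  # hypothetical helper
```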
I found the explanation for the `allow_copy` deviations in some older meeting notes:

_@maartenbreddels: if `allow_copy` or `allow_memory_copy`, then clearer to me. I am more in favor of `allow_copy` being `False` and thus being safe (performance-wise, and that I don't accidentally crash my computer)._
_@jorisvandenbossche: an example would be string columns in pandas. Currently, in pandas, we cannot support Arrow string columns, which use two buffers. In the future, pandas will use Arrow, but right now it uses NumPy's object dtype. So at the moment pandas would require a copy, and so would always raise an exception._
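To make the "two buffers" point concrete (assuming pyarrow is installed; this is only an illustration, not part of the protocol):

```python
import pyarrow as pa

# An Arrow string column is backed by an int32 offsets buffer plus a UTF-8 data
# buffer (and a validity bitmap). NumPy object-dtype strings are scattered
# Python objects, so pandas can only produce such buffers by making a copy.
arr = pa.array(["foo", "bar", None])
print(arr.buffers())  # [validity bitmap, offsets buffer, data buffer]
```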
Based on the above, I think we can explicitly state that `allow_copy` can have any default, and that libraries must add an `allow_copy` keyword.
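As a rough illustration of that rule (the protocol methods `column_names` and `get_column_by_name` are real; the rest is a sketch, not any library's actual code):

```python
def from_dataframe(df, allow_copy=False):  # the default is up to each library
    xdf = df.__dataframe__(allow_copy=allow_copy)
    for name in xdf.column_names():
        column = xdf.get_column_by_name(name)
        # If `column` cannot be converted zero-copy (e.g. object-dtype strings),
        # the implementation must raise here when allow_copy is False, rather
        # than silently copying.
        ...
```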
The summary of a discussion on this yesterday was:

- The `allow_copy` default value may vary between libraries.
- It's not yet fully precise what the exact semantics of `allow_copy` are. Given that it's implemented as a pass-through to `__dataframe__`, the description there should apply: https://github.com/data-apis/dataframe-api/blob/2b37b1d/protocol/dataframe_protocol.py#L401-L404. That could still be made more precise though.
- Modin cannot really honor `allow_copy` right now because it basically always makes a copy. There's an internal "object store" where the data is copied to even if it's nicely laid out in memory already. That said, @vnlitvinov was fine with adding `allow_copy=True` for consistency.
- There was some interest in having `from_dataframe` in a standard namespace, to avoid folks using (e.g.) `pd.from_dataframe` and still converting away from the native representation. However, gh-85 has a better alternative (expose the native converter on the exchange df object itself).
One of the "to be decided" items at https://github.com/data-apis/dataframe-api/blob/dataframe-interchange-protocol/protocol/dataframe_protocol_summary.md#to-be-decided is:
_Should there be a standard `from_dataframe` constructor function? This isn't completely necessary, however it's expected that a full dataframe API standard will have such a function. The array API standard also has such a function, namely `from_dlpack`. Adding at least a recommendation on syntax for this function would make sense, e.g., `from_dataframe(df, stream=None)`. Discussion at https://github.com/data-apis/dataframe-api/issues/29#issuecomment-685903651 is relevant._
In the announcement blog post draft I tentatively answered that with "yes", and added an example. The question is what the desired signature should be. The Pandas prototype currently has the most basic signature one can think of:
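A sketch of that basic form (the exact prototype may differ slightly; `DataFrameObject` just means "any object implementing `__dataframe__`"):

```python
def from_dataframe(df: "DataFrameObject") -> "pd.DataFrame":
    """
    Construct a pandas DataFrame from ``df`` if it supports ``__dataframe__``.
    """
```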
The above just takes any dataframe supporting the protocol, and turns the whole thing into the "library-native" dataframe. Now of course, it's possible to add functionality to it, to extract only a subset of the data. Most obviously, named columns:
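For example (a sketch; the keyword name `col_names` is illustrative, not something that has been agreed on):

```python
from typing import Iterable, Optional

def from_dataframe(
    df: "DataFrameObject", *, col_names: Optional[Iterable[str]] = None
) -> "pd.DataFrame":
    """
    Construct a pandas DataFrame from ``df``, restricted to the named columns
    (or all columns if ``col_names`` is None).
    """
```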
Other things we may or may not want to support: selecting columns by position, retrieving only some of the chunks, and so on.

My personal feeling is:

- A `col_indices=None` keyword doesn't seem necessary on top of selecting columns by name.
- For chunks, it's better to call `__dataframe__` first, then inspect some metadata, and only then decide what chunks to get.

Thoughts?