Yes, IIRC the official R-Python interop library supports shipping PyCapsules to R (cc @paleolimbot).
Here, in case it's helpful!
Ah, that doesn't sound helpful actually, because reticulate is only considering the case where the "context" of the capsule (an embedded opaque pointer) was created by R. Here, AFAIU, the case would be passing a Python-created PyCapsule to R. Casting a Python-managed opaque pointer to an R SEXP is probably a bad idea (though who knows :-)).
I mean, the code reference you gave was very helpful :-)
I see, that's the other direction (R to Python). I assumed that would round-trip, but apparently it doesn't (https://github.com/rstudio/reticulate/blob/74d139b1772d29dce24b22a828f2972ac97abacf/src/python.cpp#L755-L756).
Casting a Python-managed opaque pointer to an R SEXP is probably a bad idea (though who knows :-)).
I think in this case that's exactly what we want! It's easy to work around, though, as long as we can get an address (similar to what the Arrow R package does, except in this case the R external pointer will keep a Python object reference).
Well, `py_to_r` doesn't seem to handle capsules at all? What am I missing?
Nothing! reticulate doesn't handle them. In this case the R package would implement `py_to_r.some.qualified.python.type.schema()` and return an external pointer classed as `nanoarrow_schema` (for example). My point was that the semantics should be exactly the same as if the transformation was automatic (at least in this case).
Going to punt this, since I think if we start returning PyCapsules, you won't be able to work with them unless you also have PyArrow 14, and I don't want to bump the minimum PyArrow version so much right now.
I'm not sure I understand the relationship with PyArrow here. What are the PyCapsules in this issue supposed to convey?
We return custom wrappers around C Data Interface objects. We want to return compliant PyCapsules (and Joris already prototyped that), but first I also want some way to make the PyCapsules work with versions of PyArrow that can't import them (since they're opaque to Python code, right?)
since they're opaque to Python code, right?
Indeed, they are, except using `ctypes` or `cffi`.
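For example, here's a minimal sketch of reading the pointer back out of a schema capsule with `ctypes` (the capsule name `"arrow_schema"` comes from the Arrow PyCapsule protocol spec; `capsule` is assumed to have come from an `__arrow_c_schema__()` call):

```python
import ctypes

# Declare the CPython C-API call we need; PyCapsule_GetPointer returns the
# void* stored in the capsule if the name matches.
ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]

def capsule_to_address(capsule) -> int:
    # Raw ArrowSchema* address, usable with pre-capsule import APIs
    # that take an integer pointer.
    return ctypes.pythonapi.PyCapsule_GetPointer(capsule, b"arrow_schema")
```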
I think the idiom to apply would be to implement `__arrow_c_stream__()` on the statement wrapper and have that be the one and only thing that returns a capsule?
But we have other methods (like get_table_schema) that need to return capsules.
But we have other methods (like get_table_schema) that need to return capsules.
I think there it would return an object that implements `__arrow_c_schema__()`? I think the intention is that a user would do:

```python
pyarrow.schema(conn.get_table_schema())
```
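For concreteness, a minimal sketch of that shape (the class name is illustrative, not the actual ADBC type):

```python
# Hypothetical wrapper: owns an "arrow_schema" PyCapsule and exposes it via
# the protocol dunder, so pyarrow.schema() (pyarrow >= 14) can import it.
class SchemaHandle:
    def __init__(self, capsule):
        self._capsule = capsule  # PyCapsule named "arrow_schema"

    def __arrow_c_schema__(self):
        # The protocol consumer just asks for the capsule back.
        return self._capsule
```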
That still requires you to have PyArrow 14, which is the main thing I wanted to avoid.
I guess that means I'd keep the existing objects we have (that expose the raw address), and just have them implement the new dunder methods in addition
Well, given the PyArrow CVE I think the next release of ADBC will have to require 14.0.1 as a baseline.
Right - but I think it behooves another Arrow project to follow our own advice :slightly_smiling_face:
Started looking at this again, exploring how we can adopt the capsule protocol in ADBC python.
The low-level interface (`_lib.pyx`) has no dependency on pyarrow, and currently has `ArrowSchemaHandle` / `ArrowArrayHandle` / `ArrowArrayStreamHandle` classes that hold the C structs; instances of those classes are returned by the various objects.
I think in theory we could replace those current Handle classes with PyCapsules directly (and in the higher-level code, we can still extract the pointer from the PyCapsule when dealing with a pyarrow version that doesn't support capsules directly). However, those handle objects are currently exposed in the public API, so would we be OK with just removing them? (I don't know if there are external libraries that use them directly.) We could also keep them and add the dunder methods to them, so they are importable without having to access the integer address. Which is what David mentioned above (https://github.com/apache/arrow-adbc/issues/70#issuecomment-1790708865), I assume:
I guess that means I'd keep the existing objects we have (that expose the raw address), and just have them implement the new dunder methods in addition
One advantage of keeping some generic wrapper/handle class that has the dunders vs returning pycapsules directly is that the return values of the low-level interface can then more easily be passed to a library that expects an object with the dunder defined instead of the capsules directly (i.e. how we currently implemented support for this in pyarrow: e.g. `pyarrow.array(..)` checks for the `__arrow_c_array__` attribute on the passed object, but doesn't accept capsules directly, xref https://github.com/apache/arrow/issues/38010).
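As a hedged sketch of that pattern (the attribute and exact signatures here are illustrative, not the actual `_lib.pyx` definitions), a handle can keep exposing the raw address for pre-capsule consumers while also implementing the dunder:

```python
class ArrowArrayStreamHandle:
    """Illustrative sketch: a wrapper exposing both a raw address and the dunder."""

    def __init__(self, capsule, address):
        self._capsule = capsule  # PyCapsule named "arrow_array_stream"
        self.address = address   # raw ArrowArrayStream* as a Python int

    def __arrow_c_stream__(self, requested_schema=None):
        # Capsule-aware consumers (e.g. pyarrow >= 14) call this dunder
        # rather than accepting a bare capsule object.
        return self._capsule
```

Old consumers could keep using `handle.address` with the pointer-based import APIs, while new consumers can pass the handle to anything that speaks the protocol.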
The DBAPI is much more tied to pyarrow, so I don't know if, in the short term, we want to enable getting arrow data without a dependency on pyarrow and relying only on the capsule protocol. That would require quite some changes.
Just getting an overview for myself:
- `Connection.adbc_get_info` and `adbc_get_table_types` -> return info as a dict or list, but under the hood consume the stream with pyarrow and convert it to a pylist -> this doesn't expose pyarrow data, so in the short term this could continue to require pyarrow as a runtime dependency
- `Connection.adbc_get_objects` returns a `pyarrow.RecordBatchReader`
- `Connection.adbc_get_table_schema` returns a `pyarrow.Schema`
- `Cursor._bind` / `_prepare_execute` / `execute` -> currently accept a pyarrow RecordBatch/Table as `parameters`, but this can be expanded to any object that has the array or array stream protocol
- `Cursor.execute` / `adbc_read_partition` set a `_result` object to a `_RowIterator` of an `AdbcRecordBatchReader`. This reader subclasses `pyarrow.RecordBatchReader` (to ensure adbc error messages are properly propagated)
- `Cursor.adbc_ingest` accepts a pyarrow RecordBatch, Table or RecordBatchReader as `data`; this can also be generalized to any object that supports the protocol (see the sketch after this list)
- `Cursor.adbc_execute_schema` / `adbc_prepare` return a `pyarrow.Schema`
- `Cursor.fetchallarrow` / `fetch_arrow_table` return a `pyarrow.Table`
- `Cursor.fetch_record_batch` returns a `pyarrow.RecordBatchReader`
- `Cursor.fetchone` / `fetchmany` / `fetchall` use the `_RowIterator`, which uses the pyarrow RecordBatchReader to iterate over the data -> this only uses pyarrow under the hood for converting the data to Python tuples, and so in the short term can continue to do that with a pyarrow runtime dependency.

Changing the methods that currently return a pyarrow Schema/Table/RecordBatchReader to return a generic object implementing the dunders seems too much of a breaking change (and would also be much less convenient for pyarrow users, e.g. requiring users to do `pyarrow.schema(conn.get_table_schema())` instead of `conn.get_table_schema()`).
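A hedged sketch of what generalizing the ingestion path could look like (the helper name and structure are illustrative, not the actual DBAPI internals):

```python
def _resolve_stream_capsule(data):
    """Return an ArrowArrayStream PyCapsule from any protocol-supporting object."""
    if hasattr(data, "__arrow_c_stream__"):
        # Covers pyarrow Table/RecordBatchReader (>= 14) and any third-party
        # object implementing the Arrow PyCapsule protocol.
        return data.__arrow_c_stream__()
    raise TypeError(
        f"{type(data).__name__} does not support the Arrow stream protocol"
    )
```

The hardcoded pyarrow paths would remain as a fallback alongside a check like this.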
But would we want to add additional method variants of each of those that return something generic like that?
Of course, assuming you have pyarrow 14+ installed, returning pyarrow objects that implement the dunders automatically makes the capsule protocol available in ADBC as well. So it's more a question of to what extent we want to make it possible to use the DBAPI layer without pyarrow.
So in summary, short term I think I would do:

- Add the dunder methods to the handle classes of the low-level interface, which already enables using the low-level interface without pyarrow and with the capsule protocol
- In the places that accept data (eg ingest), generalize to accept objects that implement the dunders in addition to hardcoded support for pyarrow

And then longer term we can think about whether we also want to enable using the DBAPI in some form without a runtime dependency on pyarrow.
So in summary, short term I think I would do:

- Add the dunder methods to the handle classes of the low-level interface, which already enables using the low-level interface without pyarrow and with the capsule protocol
- In the places that accept data (eg ingest), generalize to accept objects that implement the dunders in addition to hardcoded support for pyarrow
+1
Thanks! Yup, this is my plan (I started working on it over holiday but realized it was...holiday)
Completed in #1346
We should try to use the 'native' type of the C API. Apparently, this will also ease interoperability with R.