apache / arrow-adbc

Database connectivity API standard and libraries for Apache Arrow
https://arrow.apache.org/adbc/
Apache License 2.0

python/adbc_driver_manager: use PyCapsule for handles to C structs #70

Closed lidavidm closed 9 months ago

lidavidm commented 2 years ago

We should try to use the 'native' type of the C API. Apparently, this will also ease interoperability with R.

pitrou commented 2 years ago

Yes, IIRC the official R-Python interop library supports shipping PyCapsule to R (cc @paleolimbot ).

paleolimbot commented 2 years ago

Here, in case it's helpful!

https://github.com/rstudio/reticulate/blob/74d139b1772d29dce24b22a828f2972ac97abacf/src/python.cpp#L1371-L1389

pitrou commented 2 years ago

Ah, that doesn't sound helpful actually, because reticulate only considers the case where the "context" of the capsule (an embedded opaque pointer) was created by R. Here, the case AFAIU would be passing a Python-created PyCapsule to R. Casting a Python-managed opaque pointer to an R SEXP is probably a bad idea (though who knows :-)).

pitrou commented 2 years ago

I mean, the code reference you gave was very helpful :-)

paleolimbot commented 2 years ago

I see... that's the other direction (R to Python). I assumed that would round-trip, but apparently it doesn't ( https://github.com/rstudio/reticulate/blob/74d139b1772d29dce24b22a828f2972ac97abacf/src/python.cpp#L755-L756 ).

Casting a Python-managed opaque pointer to an R SEXP is probably a bad idea (though who knows :-)).

I think in this case that's exactly what we want! It's easy to work around, though, as long as we can get an address (similar to what the Arrow R package does, except in this case the R external pointer will keep a reference to a Python object).

pitrou commented 2 years ago

Well, py_to_r doesn't seem to handle capsules at all? What am I missing?

paleolimbot commented 2 years ago

Nothing! reticulate doesn't handle them. In this case the R package would implement py_to_r.some.qualified.python.type.schema() and return an external pointer classed as nanoarrow_schema (for example). My point was that the semantics should be exactly the same as if the transformation was automatic (at least in this case).

lidavidm commented 1 year ago

Going to punt on this, since I think if we start returning PyCapsules, you won't be able to work with them unless you also have PyArrow 14, and I don't want to bump the minimum PyArrow version that much right now.

pitrou commented 1 year ago

I'm not sure I understand the relationship with PyArrow here. What are the PyCapsules in this issue supposed to convey?

lidavidm commented 1 year ago

We currently return custom wrappers around C Data Interface objects. We want to return compliant PyCapsules instead (Joris already prototyped that), but first I also want some way to make the PyCapsules work with versions of PyArrow that can't import them (since they're opaque to Python code, right?)

pitrou commented 1 year ago

since they're opaque to Python code, right?

Indeed, they are, except using ctypes or cffi.

paleolimbot commented 1 year ago

I think the idiom to apply would be to implement __arrow_c_stream__() on the statement wrapper and have that be the one and only thing that returns a capsule?

lidavidm commented 1 year ago

But we have other methods (like get_table_schema) that need to return capsules.

paleolimbot commented 1 year ago

But we have other methods (like get_table_schema) that need to return capsules.

I think there it would return an object that implements __arrow_c_schema__()? I think the intention is that a user would do:

pyarrow.schema(conn.get_table_schema())

lidavidm commented 1 year ago

That still requires you to have PyArrow 14, which is the main thing I wanted to avoid

lidavidm commented 1 year ago

I guess that means I'd keep the existing objects we have (that expose the raw address), and just have them implement the new dunder methods in addition
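
That "both paths" idea could look roughly like this (a sketch with hypothetical names; the real handle classes in `_lib.pyx` are Cython classes holding the C structs directly):

```python
class ArrowSchemaHandleSketch:
    """Hypothetical handle exposing both access paths: the raw integer
    address (for consumers on pyarrow < 14 using the legacy
    _import_from_c-style APIs) and the capsule-protocol dunder (for
    pyarrow >= 14 and other capsule-aware libraries)."""

    def __init__(self, capsule, address):
        self._capsule = capsule
        self._address = address

    @property
    def address(self):
        # Legacy path: callers pass this integer to an import-from-C API.
        return self._address

    def __arrow_c_schema__(self):
        # Capsule-protocol path: consumers such as pyarrow.schema()
        # call this dunder and receive the ArrowSchema PyCapsule.
        return self._capsule
```

With such a handle, `pyarrow.schema(handle)` works on new PyArrow via the dunder, while older code can keep reading `handle.address`.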

lidavidm commented 1 year ago

Well, given the PyArrow CVE I think the next release of ADBC will have to require 14.0.1 as a baseline.

pitrou commented 1 year ago

There's the hotfix as well if you don't want to require 14.0.1.

lidavidm commented 1 year ago

Right - but I think it behooves us, as another Arrow project, to follow our own advice :slightly_smiling_face:

jorisvandenbossche commented 11 months ago

Started looking at this again, exploring how we can adopt the capsule protocol in ADBC python.

The low-level interface (_lib.pyx) has no dependency on pyarrow, and currently has ArrowSchemaHandle / ArrowArrayHandle / ArrowArrayStreamHandle classes that hold the C structs, and those classes are returned in the various objects.

I think in theory we could replace those current Handle classes with PyCapsules directly (and in the higher-level code, we can still extract the pointer from the PyCapsule when having to deal with a pyarrow version that doesn't support capsules directly). However, those handle objects are currently exposed in the public API, so would we be OK with just removing them? (I don't know if there are external libraries that use them directly?) We could also keep them and add the dunder methods to them, so they are importable without having to access the integer address. That is what David mentioned above (https://github.com/apache/arrow-adbc/issues/70#issuecomment-1790708865), I assume:

I guess that means I'd keep the existing objects we have (that expose the raw address), and just have them implement the new dunder methods in addition

One advantage of keeping a generic wrapper/handle class with the dunders, versus returning PyCapsules directly, is that the return values of the low-level interface can then more easily be passed to a library that expects an object with the dunder defined rather than a bare capsule. That is how we currently implemented support for this in pyarrow: e.g. pyarrow.array(..) checks for an __arrow_c_array__ attribute on the passed object, but doesn't accept capsules directly (xref https://github.com/apache/arrow/issues/38010).


The DBAPI layer is much more tied to pyarrow, so I don't know if, in the short term, we want to enable getting Arrow data without a dependency on pyarrow, relying only on the capsule protocol. That would require quite some changes.

Just getting an overview for myself:

Changing the methods that currently return a pyarrow Schema/Table/RecordBatchReader to return a generic object implementing the dunders seems too much of a breaking change (and would also be much less convenient for pyarrow users, e.g. requiring them to write pyarrow.schema(conn.get_table_schema()) instead of conn.get_table_schema()). But would we want to add additional variants of each of those methods that return something generic like that?

Of course, assuming you have pyarrow 14+ installed, by returning pyarrow objects that implement the dunders we also automatically make the capsule protocol available in ADBC. So it's more a question of to what extent we want to make it possible to use the DBAPI layer without pyarrow.


So in summary, short term I think I would do:

  • Add the dunder methods to the handle classes of the low-level interface, which already enables using the low-level interface without pyarrow and with the capsule protocol

  • In the places that accept data (eg ingest), generalize to accept objects that implement the dunders in addition to hardcoded support for pyarrow

And then longer term we can think about whether we also want to enable using the DBAPI in some form without a runtime dependency on pyarrow.

pitrou commented 11 months ago

So in summary, short term I think I would do:

  • Add the dunder methods to the handle classes of the low-level interface, which already enables using the low-level interface without pyarrow and with the capsule protocol

  • In the places that accept data (eg ingest), generalize to accept objects that implement the dunders in addition to hardcoded support for pyarrow

+1

lidavidm commented 11 months ago

Thanks! Yup, this is my plan (I started working on it over holiday but realized it was...holiday)

lidavidm commented 9 months ago

Completed in #1346