data-apis / array-api

RFC document, tooling and other content related to the array API standard
https://data-apis.github.io/array-api/latest/

Handling materialization of lazy arrays #748

Open hameerabbasi opened 9 months ago

hameerabbasi commented 9 months ago

Background

Some colleagues and I were doing some work on sparse when we stumbled onto a limitation of the current Array API standard, and @kgryte was kind enough to point out that it might have wider implications than just sparse, so it would be prudent to discuss it with other relevant parties in the community before settling on an API design, to avoid fragmentation.

Problem Statement

There are two notable things missing from the Array API standard today that sparse needs, and that Dask, JAX, and other relevant libraries might also need.

Potential solutions

Overload the Array.device attribute and the Array.to_device method.

One option is to overload the objects returned/accepted by these so that they combine a device with a storage format. Something like the following:

```python
# Sketch of the proposal; `Device` and `Format` are assumed to be
# defined elsewhere by the implementing library.
class Storage:
    @property
    def device(self) -> Device:
        ...

    @property
    def format(self) -> Format:
        ...

    def __eq__(self, other: "Storage") -> bool:
        """Compatible if combined?"""

    def __ne__(self, other: "Storage") -> bool:
        """Incompatible if combined?"""


class Array:
    @property
    def device(self) -> Storage:
        ...

    def to_device(self, device: Storage, /, *, stream=None) -> "Array":
        # other keyword arguments (e.g. ``stream``) as in the current standard
        ...
```

To materialize an array, one could use to_device(default_device()) (possible after #689 is merged).
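
As a rough, runnable toy of what that flow could look like (all names below, including `Format`, `default_device()`, and the conversion logic, are hypothetical stand-ins, not existing API):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Format(Enum):
    # Hypothetical storage formats; a real library would define its own.
    DENSE = auto()
    COO = auto()

@dataclass(frozen=True)
class Storage:
    # The combined "device + storage format" object from the sketch above.
    device: str          # stand-in for a real Device object
    format: Format
    # frozen dataclass: __eq__/__ne__ compare device and format together

def default_device() -> Storage:
    # Stand-in for the default-device inspection discussed in gh-689;
    # here it means "eager, dense, on the CPU".
    return Storage("cpu", Format.DENSE)

class Array:
    def __init__(self, data, storage: Storage):
        self._data, self._storage = data, storage

    @property
    def device(self) -> Storage:
        return self._storage

    def to_device(self, storage: Storage, /) -> "Array":
        # A real library would convert/materialize here; this toy only relabels.
        return Array(self._data, storage)

# "Materialize" a lazily stored array by moving it to the default storage:
x = Array([[0, 0, 1]], Storage("cpu", Format.COO))
y = x.to_device(default_device())
assert y.device == default_device()
```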

Advantages

As far as I can see, it's compatible with how the Array API standard works today.

Disadvantages

We're mixing the concepts of an execution context and storage format, and in particular overloading operators in a rather weird way.

Introduce an Array.format attribute and Array.to_format method.

Advantages

We can get the API right, maybe even introduce xp.can_mix_formats(...).

Disadvantages

Would need to wait until at least the 2024 revision of the standard.
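
For comparison, a rough interface sketch of what this second option could look like (`can_mix_formats` and the format names are hypothetical; nothing here exists in the standard today):

```python
from enum import Enum, auto
from typing import Protocol

class Format(Enum):
    # Hypothetical format identifiers; each library would define its own set.
    DENSE = auto()
    COO = auto()
    CSR = auto()

class Array(Protocol):
    @property
    def format(self) -> Format: ...

    def to_format(self, format: Format, /) -> "Array": ...

def can_mix_formats(*formats: Format) -> bool:
    """Hypothetical namespace-level query: can arrays in these formats be
    combined in a single operation without an explicit conversion first?"""
    ...
```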

Tagging potentially interested parties:

leofang commented 9 months ago

I think this topic will have to be addressed in v2024, as it's too big to be squeezed in v2023 which we're trying very hard to wrap up πŸ˜…

rgommers commented 9 months ago

A few quick comments:

hameerabbasi commented 9 months ago

> I think this topic will have to be addressed in v2024, as it's too big to be squeezed in v2023 which we're trying very hard to wrap up 😅

No pressure. πŸ˜‰

> Materialization via some function/method in the API that triggers compute would be the one thing that is possibly actionable. However, that is quite tricky. The page I linked above has a few things to say about it.

Thanks Ralf -- that'd be a big help indeed. Materializing an entire array, as opposed to one element, is something that should be a common API across libraries, IMHO; I've changed the title to reflect that.

kgryte commented 8 months ago

Cross linking https://github.com/data-apis/array-api/issues/728 as it may be relevant to this discussion.

adityagoel4512 commented 5 months ago

> Materializing an entire array as opposed to one element is something that should be a common API across libraries, IMHO,

Just wanted to point out that it may be common but not universal. For instance, ndonnx arrays may not have any data that can be materialized. Such arrays do have data types and shapes and enable ONNX export of Array API compatible code. ONNX models are serializable computation graphs that you can load later, and so these "data-less" arrays denote model inputs that can be supplied at an entirely different point in time (in a completely different environment).

There are some inherently eager functions like __bool__ where we just raise an exception if there is no materializable data, in line with the standard. Any proposals around "lazy" arrays collecting values should have some kind of escape hatch like this.
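
As a hedged illustration of such an escape hatch (a toy class, not ndonnx's actual implementation), an inherently eager dunder on a possibly data-less array might look like this:

```python
class LazyArray:
    """Toy array that may or may not carry concrete values."""

    def __init__(self, value=None):
        # value is None for "data-less" arrays, e.g. graph inputs that
        # will only be supplied when the exported model is run later.
        self._value = value

    def __bool__(self) -> bool:
        if self._value is None:
            # Escape hatch: there is nothing to materialize, so refuse.
            raise ValueError(
                "cannot convert a data-less lazy array to a Python bool"
            )
        return bool(self._value)
```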

TomNicholas commented 4 months ago

I think that xarray ends up surfacing closely-related issues to this - see https://github.com/pydata/xarray/issues/8733#issuecomment-2249011104 for a summary of the problems.

hmaarrfk commented 4 months ago

One thing I've been using is np.copyto.

  1. I create lazy arrays that allow slicing in a lazy fashion.
  2. I copy the results into the pre-allocated arrays.

Pre-allocated arrays make a big difference in my mind in big data applications.
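
For concreteness, the pattern with plain NumPy looks roughly like this (a NumPy view stands in for the lazy-slicing wrapper):

```python
import numpy as np

# Pre-allocate the output once, e.g. a memory-mapped or pinned buffer.
out = np.empty((4, 4), dtype=np.float64)

# Stand-in for a lazily sliced source; in practice this would be a view
# produced by a lazy-indexing wrapper rather than a plain ndarray.
src = np.arange(64, dtype=np.float64).reshape(8, 8)[::2, ::2]

# Materialize the result directly into the pre-allocated buffer,
# avoiding an extra temporary allocation.
np.copyto(out, src)
```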

rgommers commented 4 months ago

Thanks for sharing @TomNicholas. I stared at your linked xarray comment for a while, but am missing too much context to fully understand that I'm afraid.

You're touching on to_device and __array__ there, so it looks like there's a "crossing the boundary between libraries" element - in the standard that'd also involve from_dlpack and asarray. For the problem of interchange between two arbitrary libraries like xp2.asarray(an_xp1_array), then:

Dask does (A), at least when one calls np.asarray on a Dask array with not-yet-materialized values, and I think that in general that's the right thing to do when execution is actually possible. Dask is fairly inconsistent in when it allows triggering execution, though: it sometimes does so and sometimes raises.
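
Concretely, that (A) behaviour with Dask looks roughly like this (assuming `dask[array]` is installed):

```python
import numpy as np
import dask.array as da

x = da.ones((1000, 1000), chunks=(100, 100))  # lazy, nothing computed yet

# np.asarray goes through __array__, which triggers execution of the graph
# and returns a materialized NumPy array.
x_np = np.asarray(x)
assert isinstance(x_np, np.ndarray)
```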

@TomNicholas are you hitting case (C) here with Xarray? And if so, is that for interchange between libraries only, or something else?

rgommers commented 4 months ago

> One thing I've been using is np.copyto.

@hmaarrfk we should indeed be adding a copy function I think; xref gh-495 for that. Note that it isn't inherently eager and doesn't require triggering compute, though.

TomNicholas commented 4 months ago

> Thanks for sharing @TomNicholas. I stared at your linked xarray comment for a while, but am missing too much context to fully understand that I'm afraid.

Sorry @rgommers ! I'll try to explain the context here:

In xarray we try to wrap all sorts of arrays, including multiple types of lazy arrays. Originally xarray wrapped numpy arrays, then it gained an intermediate layer of its own internal lazy indexing classes which wrap numpy arrays, then it also gained the ability to wrap dask arrays (but special-cased them).

More recently I tried to generalize this so that xarray could wrap other lazily-evaluated chunked arrays (in particular cubed arrays, which act like a drop-in replacement for dask.array).

A common problem is that different lazy array types have different semantics for triggering computation. Coercing to numpy via __array__ isn't really sufficient, because there are often important parameters one might need to pass to the .compute method (e.g. which dask scheduler to use). Xarray currently special-cases several libraries that have different semantics, and also has a framework for wrapping dask vs cubed compute calls.

Dask and Cubed are also special in that they have .chunks (and .rechunk). Computation methods also often need to be specially applied using functions which understand how to map over these chunks, e.g. dask.array.blockwise.
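
To make this concrete, a rough Dask illustration of both points, the compute-time parameters and the chunk-aware APIs (Cubed aims to expose analogous operations):

```python
import numpy as np
import dask.array as da

x = da.random.random((1000, 1000), chunks=(250, 250))

# Parameters like the scheduler cannot be expressed through __array__:
result = x.compute(scheduler="threads")

# Chunk-aware APIs that xarray has to dispatch on per library:
print(x.chunks)                  # ((250, 250, 250, 250), (250, 250, 250, 250))
y = x.rechunk((500, 500))        # lazily change the chunk structure
z = da.blockwise(np.add, "ij", x, "ij", x, "ij", dtype=x.dtype)
```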

More recently still, we've realised there's another type of array we want to wrap: chunked arrays that are not necessarily computable. This is what the issue I linked was originally about.

The comment I linked to is trying to suggest how we might separate out and distinguish between all these cases from within xarray, with the maximum amount of things "just working".

> You're touching on to_device and __array__ there, so it looks like there's a "crossing the boundary between libraries" element

Not really - I'm mostly just talking about lazy/duck array -> numpy so far.