Open shoyer opened 6 years ago
This sounds interesting. I am curious what the practical difference from dask is. Does it mean some math is lazy by default (without any external library)?
The main practical difference is that it allows us to reliably guarantee that expressions like `f(x, y)[i]` always get evaluated like `f(x[i], y[i])`. Dask doesn't have this optimization yet (https://github.com/dask/dask/issues/746), so indexing operations still compute the function `f()` on each block of an array. This issue provides full context from the xarray side: https://github.com/pydata/xarray/issues/1725
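That guarantee holds for any purely elementwise function: indexing commutes with applying the function, so a lazy array can safely push indexers down to its arguments. A quick NumPy check (with an arbitrary elementwise `f` of my own choosing, for illustration only):

```python
import numpy as np

def f(x, y):
    # any purely elementwise function commutes with indexing
    return np.sin(x) + y ** 2

x = np.arange(12.0).reshape(3, 4)
y = np.ones((3, 4))

i = (slice(1, 3), slice(0, 2))  # a basic/outer indexer

# f(x, y)[i] == f(x[i], y[i]) -- the identity a lazy array can exploit
assert np.array_equal(f(x, y)[i], f(x[i], y[i]))
```

Evaluating the right-hand side only touches the indexed subset of the inputs, which is exactly the optimization dask's blockwise evaluation does not yet perform.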
The typical example is spatially referenced imagery, e.g., a 2D satellite photo of the surface of the Earth with 2D latitude/longitude coordinates associated with each point. It would be very expensive to store full latitude and longitude arrays, but fortunately they can usually be computed cheaply from row and column indices.
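As a minimal sketch of that idea (the affine coefficients below are made-up assumptions, not from any real dataset), the coordinate arrays can be generated on demand from row/column indices instead of being stored in full:

```python
import numpy as np

# hypothetical georeferencing: lat/lon as an affine function of pixel indices
LAT0, DLAT = 60.0, -0.01   # assumed origin and per-row step
LON0, DLON = -120.0, 0.01  # assumed origin and per-column step

def latlon(rows, cols):
    """Compute 2D lat/lon arrays for the given row/column indices."""
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    return LAT0 + DLAT * rr, LON0 + DLON * cc

# only the indexed subset is ever materialized
lat, lon = latlon(np.arange(2), np.arange(3))
print(lat.shape, lon.shape)  # (2, 3) (2, 3)
```

Storing full coordinate arrays for a large image would double its memory footprint twice over; recomputing them from indices like this is essentially free.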
Ideally, this logic would live outside xarray. But it's important enough to some xarray users (especially geoscience + astronomy) and we have enough related functionality (e.g., for lazy and explicit indexing) that it probably makes sense to add it.
Thanks, @shoyer
> we have enough related functionality (e.g., for lazy and explicit indexing)
Agreed. Actually, it sounds very fun to code the lazy arithmetic.
> Ideally, this logic would live outside xarray.
Yes, I am concerned about this. We have discussed supporting more kinds of array-likes (e.g., dask, sparse, cupy) in #1938, and I thought the lazy array could (ideally) be one of them.
But in practice, it would take a long time to realize any-array-like support, so it might be a good idea to natively support the lazy mathematics for now. If we are heading towards any-array-like support, I think the implementation of the lazy array should be as isolated from xarray's core logic as possible, so that we can move smoothly to any-array-like support in the future.
Therefore, personally, I'd like to see this lazy math implemented as a lazy array. The API I thought of is `.to_lazy()`, which converts the backend to the lazy array, similar to how `.chunk()` converts the backend to a dask array.
This is not a bad idea, but the version of lazy arithmetic that I have been contemplating (see https://github.com/pydata/xarray/pull/2302) is not yet complete. For example, it doesn't have any way to represent a lazy aggregation.
Two thoughts:

> `__array_function__` protocol

Indeed, I really like the look of https://github.com/dask/dask/issues/2538 and its implementation in https://github.com/dask/dask/pull/2608. It doesn't solve the indexing optimization yet, but that could be pretty straightforward to add -- especially once we add a notion of explicit indexing types (basic vs. outer vs. vectorized) directly into dask.
Any thoughts on the current status on this?
At SciPy, I had the realization that it would be relatively straightforward to make element-wise math between xarray objects lazy. This would let us support lazy coordinate arrays, a feature that has quite a few use-cases, e.g., for both geoscience and astronomy.
The trick would be to write a lazy array class that holds an element-wise vectorized function and passes indexers on to its arguments. I haven't thought too hard about this yet for vectorized indexing, but it could be quite efficient for outer indexing. I have some prototype code but no tests yet.
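A minimal version of such a class might look like the following. This is my own sketch, not the prototype code mentioned above, and it deliberately assumes all arguments share the same shape and that indexers are basic/outer (vectorized indexing would need more care):

```python
import numpy as np

class LazyElemwise:
    """Lazily apply an elementwise function; indexing is forwarded to the
    arguments, so f(x, y)[i] is evaluated as f(x[i], y[i])."""

    def __init__(self, func, *args):
        self.func = func
        self.args = args
        # simplification: assumes arguments broadcast to a common shape
        self.shape = np.broadcast_shapes(*(a.shape for a in args))

    def __getitem__(self, key):
        # push the basic/outer indexer down to each argument
        # (only valid here because all args share the same shape)
        return LazyElemwise(self.func, *(a[key] for a in self.args))

    def compute(self):
        return self.func(*self.args)

x = np.arange(1_000_000.0).reshape(1000, 1000)
y = np.ones((1000, 1000))
lazy = LazyElemwise(np.add, x, y)

# only a 2x2 block is ever passed to np.add
small = lazy[:2, :2].compute()
assert np.array_equal(small, (x + y)[:2, :2])
```

The key design point is that `__getitem__` returns another `LazyElemwise` rather than computing anything, so chains of indexing stay cheap until `compute()` is called.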
The question is how to hook this into xarray operations. In particular, supposing that the inputs to a function do not hold dask arrays:

+ Should lazy evaluation be handled with separate logic from `apply_ufunc`?
+ Should `.apply_ufunc()` be lazy by default?
+ Should `apply_ufunc()` be lazy only if you use some special flag, e.g., `apply_ufunc(..., lazy=True)`?

I am leaning towards the last option for now but would welcome other opinions.
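The last option could be modeled roughly as follows. This is a toy sketch with a hypothetical `apply_elemwise` function of my own, not xarray's actual `apply_ufunc` signature: the flag simply switches between eager computation and returning a deferred result.

```python
import numpy as np

def apply_elemwise(func, *args, lazy=False):
    """Toy model of the opt-in flag (hypothetical API, not xarray's):
    eager by default, deferred when lazy=True."""
    if not lazy:
        return func(*args)
    # defer: return a zero-argument thunk instead of computing now;
    # a real implementation would return a lazy array object instead
    return lambda: func(*args)

x = np.arange(4.0)
eager = apply_elemwise(np.square, x)                 # computed immediately
deferred = apply_elemwise(np.square, x, lazy=True)   # nothing computed yet
assert np.array_equal(eager, deferred())
```

The appeal of the opt-in flag is backwards compatibility: existing `apply_ufunc` callers see no behavior change, while coordinate machinery can pass `lazy=True` internally.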