I think I finally deciphered the strange behaviour of the rolling methods and their memory consumption.
This PR adds the benchmarking script that generated the graphs below.
The conclusion is that we should revert to xarray's rolling method, but call the rolled function with `skipna=False, allow_lazy=True`. `xclim.utils._rolling` consumes slightly less memory, but not enough to justify an in-house algorithm.
The rolling method goes through 4 steps:
1. Construct a new array with an added rolling dimension.
2. Reduce this array with the given function.
3. Repeat step 2, but on `data.notnull()`, to count the number of valid values in each window.
4. Return the array from step 2, filtered by the counts from step 3.
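The four steps above can be sketched with xarray's public API (a minimal illustration of the pattern, not xarray's internal code; the example array and window size are made up):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(10.0), dims="time")
window = 3

# Step 1: add a rolling dimension (windows are padded with NaN at the start).
rolled = da.rolling(time=window).construct("window")

# Step 2: reduce along the new dimension with the requested function.
summed = rolled.sum("window", skipna=False)

# Step 3: count the valid (non-NaN) values in each window.
counts = rolled.notnull().sum("window")

# Step 4: keep only windows with enough valid values (min_periods == window here).
result = summed.where(counts >= window)
```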
As of xarray 0.14.1, the `reduce` method of step 2 had a hard-coded `allow_lazy = False`. This was fixed by pydata/xarray#3435. Besides not being lazy, this triggered huge memory consumption, as shown in this graph:
Adding `allow_lazy=True` to the call solves most of the problem. However, by default, xarray uses the nanops (`nanmean`, `nanmax`, etc.), which are slower than their nan-oblivious counterparts. This is useful if the `min_periods` argument is smaller than `window`, but otherwise there is no benefit. Passing `skipna=False` does help a bit, and uses only slightly more memory than xclim's `_rolling`, as seen here:
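For illustration, passing a nan-oblivious reducer through `.reduce()` has the same effect as `skipna=False` (a sketch only; exact keyword forwarding depends on the xarray version, and `allow_lazy` only matters for dask-backed arrays, which are omitted here):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(10.0), dims="time")

# np.sum (not np.nansum) skips the per-window NaN handling entirely;
# this mirrors what skipna=False selects internally.
fast = da.rolling(time=3, min_periods=3).reduce(np.sum)
```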
That holds for `sum` and `mean`; for any other function, xclim's current implementation consumes as much memory as xarray's and is even slightly slower.
As seen on the graphs, simply skipping the nan-counting step could save a lot of memory and even some time. Most nan-oblivious functions return `nan` as soon as one value in the window is `nan`. When `skipna=False` and `min_periods == window`, the counting step is thus useless. Furthermore, skipping it simplifies the dask graph, which is always good. Maybe an option could be added to xarray's rolling for this?
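A tiny illustration of why the count is redundant in that case: a nan-oblivious reduction already invalidates any window containing a `nan`.

```python
import numpy as np

window_values = np.array([1.0, np.nan, 3.0])

# The plain reduction propagates the NaN, so no separate validity count
# is needed to mask this window.
print(np.sum(window_values))     # nan

# The nan-skipping variant would silently keep the window, which is why
# min_periods < window still requires the extra counting step.
print(np.nansum(window_values))  # 4.0
```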
Finally, all these results depend on the fact that bottleneck is not used with dask arrays in the current xarray, as per issue pydata/xarray#2940 and PR pydata/xarray#3040.