**danielballan** opened this issue 3 years ago
Should we support `?slice=::mean(2)` for downsampling? It keeps coming up.
I worry about adding much in the way of data processing because it's a slippery slope. Things like log-scaling an image can be done in the front end or by a separate microservice. But downsampling specifically is helpful to do “close to” the data because you can save so much space and time.
My concerns are mostly practical:
In addition to downsampling over N pixels with `?slice=::mean(N)`, might we also support `?slice=::mean` to average a dimension down to 0-d, such as averaging an image time series over time to produce one image?
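For concreteness, here is the numpy equivalent of what that would mean (the array and its shape are made up for illustration):

```python
import numpy

# A hypothetical image time series: a 3D array with dimensions (time, x, y).
stack = numpy.random.randint(0, 256, size=(100, 32, 32))

# ?slice=::mean would average the first dimension down to 0-d,
# reducing the (time, x, y) stack to a single (x, y) image.
mean_image = stack.mean(axis=0)
mean_image.shape  # (32, 32)
```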
By supporting `mean` but not `sum`, we ensure that we can coerce the result to the original dtype (with rounding, if integer). A mean is bounded by the minimum and maximum of its inputs, so, unlike a sum, it cannot overflow.
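A quick illustration of the dtype point, with values chosen to make the overflow obvious:

```python
import numpy

a = numpy.array([200, 201, 202], dtype=numpy.uint8)

# Summing in the original dtype overflows: 200 + 201 + 202 = 603 > 255,
# and the result wraps around modulo 256.
a.sum(dtype=numpy.uint8)  # 91

# The mean is bounded by the inputs, so it always fits after rounding.
numpy.rint(a.mean()).astype(numpy.uint8)  # 201
```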
I guess I’m leaning: “Let’s do it but mark it as experimental and reserve the right to revisit moving it into a data reduction/processing microservice, once we actually have one.”
The enhancement wouldn’t add any new query parameters, and while the syntax is a bit “clever” it is backed by a documented standard (linked in my first post above) used by the formidable IRIS-HEP group.
Summarizing a suggestion from @eliotgann addressing the question of how to handle boundary conditions:
> you can always be explicit and say you want `slice=(23:1023:mean(10))` or something
That is, if the user asks for a downsampling factor that does not divide the range evenly, we can raise an error explaining that they need to do the trimming themselves. If you want fancy behavior, you need to do a tiny bit of math to prove that you understand you are trimming data. We won’t silently trim it for you, for fear that you may not realize we are doing it.
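A minimal sketch of that check, assuming we validate the downsampling factor against the sliced extent (the function name and the error message wording are hypothetical):

```python
def check_commensurate(start, stop, factor):
    """Raise if `factor` does not evenly divide the sliced extent."""
    extent = stop - start
    if extent % factor:
        trimmed_stop = start + (extent // factor) * factor
        raise ValueError(
            f"mean({factor}) does not evenly divide the extent {extent}; "
            f"trim explicitly, e.g. slice={start}:{trimmed_stop}:mean({factor})"
        )

check_commensurate(23, 1023, 10)  # OK: 1000 is divisible by 10
check_commensurate(0, 10, 3)      # ValueError with an explicit-trim hint
```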
~~Another good argument for using “trim”: the result of `::2` and `::mean(2)` will have the same shape.~~ Not true:
```python
In [1]: import numpy

In [2]: a = numpy.arange(10)

In [3]: a[::3]
Out[3]: array([0, 3, 6, 9])

In [4]: import toolz

In [5]: toolz.partition(3, a)
Out[5]: <zip at 0x7f27d213e780>

In [6]: list(toolz.partition(3, a))
Out[6]: [(0, 1, 2), (3, 4, 5), (6, 7, 8)]

In [7]: map(numpy.mean, toolz.partition(3, a))
Out[7]: <map at 0x7f27d111c220>

In [8]: list(map(numpy.mean, toolz.partition(3, a)))
Out[8]: [1.0, 4.0, 7.0]
```
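If we do silently trim, the vectorized numpy version is straightforward. A sketch of the idea, not the shipped implementation:

```python
import numpy

def block_mean(a, factor):
    # Trim the tail so the length is a multiple of `factor`, then
    # average each block of `factor` consecutive elements.
    trimmed = a[: (len(a) // factor) * factor]
    return trimmed.reshape(-1, factor).mean(axis=1)

block_mean(numpy.arange(10), 3)  # array([1., 4., 7.])  (the trailing 9 is trimmed)
```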
I like the idea of dashing off `::mean(17)` in a URL and having that “just work”, so I’m inclined to silently trim, not force the user to provide a commensurate slice.
If we feel confident we'll stick with `mean` (not `sum` or any others), then it doesn't matter much if the last bin has a different size. It may have a higher variance, but it will have a correct value. And if you want even statistics, you can slice the range to be commensurate with the downsampling factor.
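To make that concrete, here is the same toy array with the leftover tail kept as a short final bin, using `toolz.partition_all` (which, unlike `partition`, keeps the incomplete tail):

```python
import numpy
import toolz

a = numpy.arange(10)

# The last bin (9,) has only one sample: higher variance, but its mean
# is still a correct value.
[float(numpy.mean(chunk)) for chunk in toolz.partition_all(3, a)]
# [1.0, 4.0, 7.0, 9.0]
```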
Summarizing the discussion above:
In the `?slice=` parameter, accept `mean` to average over an entire dimension and `mean(INTEGER)` to downsample. Inspired by UHI, we accept `mean` in the stride position of a slice. For example, given an image time series, i.e. a 3D array with dimensions (time, x, y):
- `?slice=::mean` averages over time to get a 2D array.
- `?slice=42,::mean(2),::mean(2)` accesses the 42nd time step, downsampled by a factor of 2.
https://uhi.readthedocs.io/en/latest/indexing.html
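For what it's worth, parsing the token in the stride position is simple. A sketch, assuming we only ever accept `mean` or `mean(INTEGER)` there (the regex and function name are made up for illustration):

```python
import re

MEAN_PATTERN = re.compile(r"^mean(?:\((\d+)\))?$")

def parse_stride(token):
    """Return ("mean", factor) for mean tokens, an int for ordinary strides."""
    match = MEAN_PATTERN.match(token)
    if match:
        factor = match.group(1)
        return ("mean", int(factor) if factor else None)
    return int(token)

parse_stride("mean")     # ("mean", None) -> average the whole dimension
parse_stride("mean(2)")  # ("mean", 2)    -> downsample by a factor of 2
parse_stride("2")        # 2              -> an ordinary stride
```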