**Open** · TomAugspurger opened this issue 2 years ago
> if it's feasible for the backend
NOT when that chunk is compressed with something that lacks clear internal blocks (e.g., gzip). Zarr does not support streaming of any sort; it only knows which blocks you want, so you need to be able to cleanly subdivide the whole thing. That is easy for uncompressed buffers and possible for block-compressed buffers (e.g., Zstd).
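To make the distinction concrete: for a raw, uncompressed buffer, the byte range of any element slice is pure arithmetic over a known base offset and itemsize, so a large contiguous chunk can be carved into smaller reference entries. A gzip stream has no such mapping; element N's bytes can only be reached by decompressing everything before it. A minimal sketch with a hypothetical helper (not kerchunk API):

```python
def byte_range(base_offset, itemsize, start, stop):
    """Byte range covering elements [start, stop) of an uncompressed 1-D array.

    This arithmetic is exactly what makes an uncompressed HDF5 buffer
    subdividable; no equivalent exists for a gzip-compressed chunk.
    """
    return base_offset + start * itemsize, base_offset + stop * itemsize

# e.g. a float64 dataset whose contiguous buffer starts at byte 512
lo, hi = byte_range(512, 8, 1000, 1010)  # elements 1000..1009
print(lo, hi)  # 8512 8592
```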
I wonder whether, in your example, reading the last point of the array is just as fast?
cf. https://github.com/fsspec/kerchunk/issues/134, which is a similar concept (@d70-t)
Currently, translating HDF5 to Zarr produces a Zarr store with chunks identical to the source's. If the source isn't chunked, this hurts performance when you slice a subset of the original data, since fsspec makes a range request for the entire buffer.
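The reference set kerchunk emits maps each Zarr chunk key to a `[url, offset, length]` triple, so an unchunked HDF5 dataset becomes a single entry spanning the whole buffer, and any slice fetches the full range. A sketch of what such a reference dict looks like (file name, offsets, and sizes are hypothetical):

```python
# Hypothetical reference set for an unchunked float64 dataset of
# 2**27 elements: one Zarr chunk "data/0" covering the whole 1 GiB buffer.
refs = {
    "version": 1,
    "refs": {
        "data/0": ["s3://bucket/file.h5", 2048, (2**27) * 8],
    },
}

# Reading even a single element through this mapping requests the full range:
url, offset, length = refs["refs"]["data/0"]
print(length / 2**30)  # 1.0 GiB fetched for any slice
```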
Here's a Kerchunked file:
Timing small reads:
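The original timing snippet isn't shown above; as a stand-in, here is a hedged sketch of the access pattern being measured, simulating the range request against an in-memory buffer (all sizes are hypothetical):

```python
import time

# 8 MiB in-memory stand-in for the remote contiguous dataset.
buf = bytes(8 * 2**20)

def fetch(offset, length):
    """Stand-in for an fsspec byte-range request."""
    return buf[offset:offset + length]

t0 = time.perf_counter()
small = fetch(4096, 80)       # the 80 bytes a 10-element float64 slice needs
t_small = time.perf_counter() - t0

t0 = time.perf_counter()
full = fetch(0, len(buf))     # what the single-chunk mapping actually fetches
t_full = time.perf_counter() - t0

print(len(small), len(full))  # 80 bytes vs. the whole buffer
```

Over a network, the gap between the two fetch sizes dominates the timings; locally the sketch only demonstrates the shapes of the two requests.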
Compared with the non-kerchunked version:
Having the flexibility to make smaller requests by splitting large ranges into separate chunks would be helpful, if it's feasible for the backend (which it should be for these large, contiguous buffers from HDF5).
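One way to picture the proposal: a single `[url, offset, length]` reference entry for a contiguous, uncompressed buffer could be tiled into smaller entries, so readers request only the sub-ranges they need. A hypothetical splitter (not part of kerchunk's API), valid only where byte ranges map cleanly onto element ranges:

```python
def split_reference(url, offset, length, n):
    """Split one [url, offset, length] entry into n equal sub-chunk entries.

    Only sound for uncompressed (or cleanly block-compressed) buffers,
    where each sub-range decodes independently.
    """
    assert length % n == 0, "sub-chunks must tile the buffer exactly"
    step = length // n
    return {str(i): [url, offset + i * step, step] for i in range(n)}

# 64 chunks of 16 MiB each instead of one 1 GiB request (values hypothetical):
refs = split_reference("s3://bucket/file.h5", 2048, 1 << 30, 64)
print(refs["0"], refs["63"])
```

The corresponding `.zarray` metadata would also need its `chunks` entry reduced to match the new sub-chunk size.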