When using zarr remote datasets, it is possible to define a compression scheme in the .zarray file.
Currently, this feature is unused: all requested buckets are decompressed before being sent to the client. We should pass the compressed chunks through to the client where possible, saving both the time spent decompressing and transmission size. This requires new data request code paths that do not decompress the buckets they read.
This would be a great addition. As a first iteration, I would apply this optimization only to datasets where the stored chunk size matches the output chunk size. Otherwise we could create too much server load.
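The decision logic described above can be sketched as follows. This is a minimal illustration, not webKnossos code: the function name `serve_chunk`, the metadata sample, and the `client_accepts` parameter are all assumptions made for the example.

```python
import gzip
import json

# Hypothetical .zarray metadata for a gzip-compressed dataset.
ZARRAY_META = json.dumps({
    "chunks": [64, 64, 64],
    "compressor": {"id": "gzip", "level": 5},
})

def serve_chunk(stored_bytes, meta_json, requested_chunk_shape, client_accepts):
    """Return (payload, encoding) for a chunk request.

    Pass the stored bytes through untouched only when the stored chunk
    shape matches the requested output chunk shape (the first-iteration
    restriction) and the client can decode the compression scheme itself.
    Otherwise fall back to the current behavior and decompress server-side.
    """
    meta = json.loads(meta_json)
    comp = (meta.get("compressor") or {}).get("id")
    if comp in client_accepts and list(requested_chunk_shape) == meta["chunks"]:
        return stored_bytes, comp  # pass-through: no server-side decompression
    payload = gzip.decompress(stored_bytes) if comp == "gzip" else stored_bytes
    return payload, None           # decompressed fallback

raw = gzip.compress(b"\x00" * 16)
# Matching chunk shape: compressed bytes are forwarded as-is.
body, enc = serve_chunk(raw, ZARRAY_META, (64, 64, 64), {"gzip"})
# Mismatched chunk shape: the server decompresses before responding.
body2, enc2 = serve_chunk(raw, ZARRAY_META, (32, 32, 32), {"gzip"})
```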
Detailed Description
Follow-up for #6144.