That is a massive chunk and shard shape. Have you successfully used shards that large with any other Zarr libraries/tools? A (1,3,2000,2000,2000) shard with a 64-bit data type is 192GB (uncompressed)! Do you really want to write shards (files) potentially that large?
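(For reference, the arithmetic behind that figure: $1 \times 3 \times 2000^3 \times 8\,\mathrm{B} = 1.92 \times 10^{11}\,\mathrm{B} = 192\,\mathrm{GB}$.)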
zarrs does not currently support incrementally writing shards, but I can consider supporting that in the future for requests like this.
The "chunk shape" is the read granularity. You want to choose a chunk shape on the order of kilobytes to low megabytes for efficient visualisation in tools like neuroglancer. A (1,1,250,250,250) chunk size with a 64-bit data type is 125MB. I'd suggest something like (1,1,50,50,50).
The "shard shape" is the write granularity. I recommend choosing a shard shape suited to parallel processing/writing. For a 64-bit data type, a (1,1,500,500,500) shard would be 1GB in memory and $\lessapprox$ 1GB on disk.
> That is a massive chunk and shard shape. Have you successfully used shards that large with any other Zarr libraries/tools?
No :smile: I'm stress-testing everything at the moment. Sorry for not mentioning that earlier; I've got my head down and was trying to capture the output before that remote session ended. It definitely wasn't a complaint. zarrs is still top of the leaderboard ;)
> zarrs does not currently support incrementally writing shards, but I can consider supporting that in the future for requests like this.
:+1:
When attempting a conversion of a (100, 3, 2000, 2000, 2000) hypervolume, I run into a memory allocation failure: