The reason I made this package is to handle one particularly challenging use case - the [C]Worthy mCDR Atlas - which I still haven't managed to do. Once it's done I plan to write a blog post about it, and maybe add it as a usage example to this repository.
This dataset has some characteristics that make it really challenging to kerchunk/virtualize[^1]:
- It's ~50TB compressed on-disk,
- It has ~500,000 netCDF files(!), each with about 40 variables,
- The largest variables are 3-dimensional, and require concatenation along an additional 3 dimensions, so the resulting variables are 6-dimensional (see the sketch just below this list),
- It requires merging in lower-dimensional variables too, not just concatenation,
- It has time encoding on some coordinates.
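To make that 3D-to-6D expansion concrete, here's a minimal sketch using `xarray.combine_nested` on virtual datasets. The dimension names, file naming pattern, and tiny lists of values are all made up for illustration - the real Atlas layout isn't shown here:

```python
import xarray as xr
from virtualizarr import open_virtual_dataset

# Hypothetical layout: these dimension names and the file pattern are
# made up for illustration, not taken from the actual Atlas.
sites = ["siteA", "siteB"]
months = ["jan", "apr"]
scenarios = ["ctrl", "alk"]

# Each file holds 3D (time, lat, lon) variables; nesting the opened
# virtual datasets 3 levels deep and concatenating along 3 new
# dimensions yields 6D variables in the combined result.
vds_nested = [
    [
        [open_virtual_dataset(f"atlas_{s}_{m}_{c}.nc") for c in scenarios]
        for m in months
    ]
    for s in sites
]
combined = xr.combine_nested(
    vds_nested,
    concat_dim=["injection_site", "injection_month", "scenario"],
    coords="minimal",
    compat="override",
)
```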
This dataset is therefore comparable to some of the largest datasets already available in Zarr (at least in terms of the number of chunks and variables, if not on-disk size), and is very similar to the pathological case described in #104:

> 24MB per array means that even a really big store with 100 variables, each with a million chunks, still only takes up 2.4GB in memory - i.e. your xarray "virtual" dataset would be ~2.4GB to represent the entire store.
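That arithmetic is easy to reproduce. In the snippet below, the ~24 bytes per chunk reference is inferred from the 24MB-per-million-chunk figure quoted above, not a measured number:

```python
# Back-of-the-envelope memory estimate for a fully virtualized store.
# The ~24 bytes per chunk reference is inferred from the quote above
# (24MB for a million-chunk array), not measured.
bytes_per_chunk_ref = 24
n_variables = 100
chunks_per_variable = 1_000_000

total_gb = bytes_per_chunk_ref * n_variables * chunks_per_variable / 1e9
print(f"~{total_gb:.1f} GB")  # -> ~2.4 GB
```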
If we can virtualize this we should be able to virtualize most things 💪
To get this done, many features need to be implemented (a rough end-to-end sketch follows the list):
[x] "Inlining" of some variables for performance when reading (else all the 3 concatenation coordinates will have chunks of length 1) (#73)
[ ] Optionally using combine_by_coords to handle the 3-dimensional concatenation, which would require #18,
[ ] Possibly sub-selection into uncompressed auxiliary data which has a longer time dimension that I only need part of), which requires:
[ ] Choosing arbitrary chunks into some uncompressed data (#86)
[ ] Indexing aligned with chunks (#51)
[x] A way to get the references files onto S3, either via
[ ] #46
[x] or generating on HPC, changing the paths to the corresponding S3 URLs using #130, and moving the altered reference files to the cloud manually.
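Putting those pieces together, here's a rough sketch of what the workflow might look like once everything lands. The `loadable_variables` argument stands in for the inlining, `rename_paths` is the path-rewriting referenced via #130, and the file paths, coordinate names, and bucket URL are all made up - none of this is tested against the actual Atlas:

```python
import xarray as xr
from virtualizarr import open_virtual_dataset

# Stand-in for the ~500,000 real file paths.
netcdf_paths = ["/scratch/atlas/run_0001.nc", "/scratch/atlas/run_0002.nc"]

# Open each file, loading ("inlining") the small concatenation
# coordinates into memory instead of keeping length-1-chunk references.
# The coordinate names here are made up.
vds_list = [
    open_virtual_dataset(
        path,
        loadable_variables=["injection_site", "injection_month", "scenario"],
    )
    for path in netcdf_paths
]

# Eventually this combine step would be combine_by_coords (needs #18);
# combine_nested works today given the right nested structure.
combined = xr.combine_by_coords(vds_list, coords="minimal", compat="override")

# References were generated on HPC, so rewrite local paths to their S3
# equivalents before moving the reference file to the cloud manually.
combined = combined.virtualize.rename_paths(
    lambda p: p.replace("/scratch/atlas/", "s3://my-bucket/atlas/")
)
combined.virtualize.to_kerchunk("atlas_refs.json", format="json")
```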
Additionally, once zarr-python actually understands some kind of chunk manifest, I want to also go back and create an actual zarr store for this dataset. That will additionally require:

- [ ] `.virtualize.to_zarr()`
[^1]: In fact, pretty much the only ways in which this dataset could be worse would be if it had differences in encoding between netCDF files, variable-length chunks, or netCDF groups, but thankfully it has none of those 😅