jbusecke opened 2 years ago
I was also unable to find the license of this dataset. I assume that it has some derived license from each of the used model datasets? Maybe @lgloege can help here
We did put this at BCO DMO, per requirement of NSF funding: https://www.bco-dmo.org/dataset/840334 There are some more references there, in case useful.
I don't know anything more specific about licenses, but I concur with your assumption. I hope @lgloege can reply here.
I have looked into this a bit more, but there is one aspect I am struggling with: each URL points to a tar file that contains multiple NetCDF files, which then need to be merged in xarray. This breaks the assumption that there is a 1:1 mapping between URLs and files. Has anybody solved this previously? @pangeo-forge/dev-team ?
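A minimal standard-library sketch of the 1:many problem described above: one URL yields one tar.gz, which in turn yields several NetCDF members that still need to be merged downstream (e.g. with xarray.merge). The file names and payload bytes here are placeholders, not the real dataset.

```python
import io
import tarfile

# Build a small in-memory tar.gz with three placeholder "NetCDF" members.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tf:
    for name in ["var_a.nc", "var_b.nc", "var_c.nc"]:
        payload = b"CDF\x01" + name.encode()
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tf.addfile(info, io.BytesIO(payload))
buf.seek(0)

# One "url" (the buffer here) maps to three files -- this is exactly
# what breaks a recipe's assumed 1:1 url-to-file mapping.
with tarfile.open(fileobj=buf, mode="r:gz") as tf:
    members = tf.getnames()
print(members)
```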
@martindurant - do you know if it's possible for fsspec to index into a .tar.gz file the way it can with a .zip file? That is the key technical question. If so, we can use the same approach described in https://github.com/pangeo-forge/staged-recipes/issues/90#issuecomment-932815548 to point at the individual files.
If not, we will not be able to ship this recipe without some more serious refactoring of pangeo-forge-recipes.
Offsets within a gzip stream are not possible. There are no block markers and sequences can even start mid-byte. I had some vague ideas about brute force options to find viable offsets, but nothing has come of them. Much better than tar.gz would be a tar of gzipped files (which would be a static version of zip), but no one does this.
We can already index into tar and zip, and have plans to index into block-compressed files like blosc and zstd (even bzip2!) but never gzip.
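The zip/gzip distinction can be seen with the standard library alone: a zip archive compresses each member independently and lists them in a central directory, so a reader can seek straight to one member (over HTTP this becomes a few byte-range requests, which is what fsspec's zip support relies on); a .tar.gz is one continuous gzip stream, so reaching any member means decompressing from byte 0. A small sketch with placeholder payloads:

```python
import io
import zipfile

# zip: members are compressed independently and indexed by a central
# directory at the end of the file, so one member can be read without
# decompressing the others.
zbuf = io.BytesIO()
with zipfile.ZipFile(zbuf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("member_001.nc", b"A" * 1000)
    zf.writestr("member_002.nc", b"B" * 1000)

with zipfile.ZipFile(zbuf) as zf:
    names = zf.namelist()
    data = zf.read("member_002.nc")  # only this member is decompressed
print(names, len(data))
```

The same random access is impossible for a .tar.gz, where the gzip layer spans the whole tar stream.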
Martin, thanks for the quick reply! That makes sense.
Just brainstorming workarounds here... @lgloege - is there any chance you could publish a new version of this dataset using .zip files instead of .tar.gz?
@lgloege tells me he will work on this. He'll let us know when there's a new posting.
Dataset Name
Large ensemble pCO2 testbed by @lgloege
Dataset URL
https://figshare.com/collections/Large_ensemble_pCO2_testbed/4568555
Description
This is a collection of randomly selected ensemble members from 4 large ensemble projects:
Each ensemble member was interpolated from its native grid to a 1x1 degree lat/lon grid. The variables are monthly over the 1982-2017 time frame and sampled as the SOCATv5 data product. Historical atmospheric CO2 is used up to 2005 with RCP8.5 after 2005.
The intention of this dataset is to evaluate ocean pCO2 gap-filling techniques.
License
Unknown
Data Format
NetCDF
Data Format (other)
No response
Access protocol
HTTP(S)
Source File Organization
The data is organized on different levels:
<model><member_id>.tar.gz
The variables inside each archive are already concatenated in time.
Example URLs
I actually have some trouble getting these from figshare. Has anyone here had experience with pulling files from a collection/dataset on figshare? I'd be happy to dig into the figshare API and parse HTTP links, but maybe there is something more clever to do with these archive/DOI repositories like figshare/Zenodo?
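For what it's worth, figshare exposes a public v2 REST API; to my understanding a collection can be walked via its articles endpoint, and each article's files endpoint lists records with direct download URLs. A sketch that only builds the endpoint URLs (the collection id comes from the figshare link above; the endpoint shapes and field names are my best understanding and should be verified against the live API):

```python
# Sketch: constructing figshare v2 API endpoint URLs to walk a
# collection down to direct file links. Endpoint shapes are assumptions
# to verify against the figshare API docs, not tested calls.
API = "https://api.figshare.com/v2"
COLLECTION_ID = 4568555  # from the figshare collection URL above

def collection_articles_url(collection_id: int, page_size: int = 100) -> str:
    # lists the articles (datasets) contained in a collection
    return f"{API}/collections/{collection_id}/articles?page_size={page_size}"

def article_files_url(article_id: int) -> str:
    # lists the files of one article; each record should carry a
    # download URL usable as a plain HTTP(S) source for a recipe
    return f"{API}/articles/{article_id}/files"

print(collection_articles_url(COLLECTION_ID))
print(article_files_url(123456))  # hypothetical article id
```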
Authorization
No; data are fully public
Transformation / Processing
This is pretty straightforward.
I'd suggest having one recipe per model (in a recipe dict) that simply combines variables by merging them.
There should probably be some rechunking, but I think I need input from the actual users (cc @hatlenheimdalthea @galenmckinley) on the best chunking structure for the use cases (e.g., are the gap-filling models trained on single-time-step maps or on per-location time series?).
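A hypothetical sketch of that layout: one recipe per model, collected in a dict as staged-recipes expects. The model names, the recipe stand-in, and the chunk sizes are all placeholders (the real recipes would be e.g. XarrayZarrRecipe objects, and the chunking depends on the user input asked for above), not the actual API.

```python
# Placeholder names for the 4 large-ensemble projects -- the real
# names would come from the dataset description.
MODELS = ["model_a", "model_b", "model_c", "model_d"]

def make_recipe(model: str) -> dict:
    # stand-in for a real recipe object that merges the per-variable
    # NetCDF files of each member and rechunks the result
    return {
        "model": model,
        "combine": "merge variables",
        # open question: chunk for maps (whole spatial field per time
        # step) or for time series (long time axis per location)?
        "target_chunks": {"time": 12, "lat": 180, "lon": 360},
    }

recipes = {f"pCO2-testbed.{m}": make_recipe(m) for m in MODELS}
print(sorted(recipes))
```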
Target Format
Zarr
Comments
No response