This adds the module `src/UFS2ARCO/ufsdataset.py` with the class `FV3Dataset`, which can be used to read replay data from s3. Other contents in this PR:

- `docs/read_and_store_replay.ipynb`: a quick example of this functionality, and a brief look at the xarray/dask/zarr interplay
- `scripts/read_from_s3.py`: another example, this one pulling multiple forecast cycles into a single zarr store
- `scripts/config-replay.yaml`: the yaml file used in that example
- `environment.yaml`: I suggest we use this environment file since it is slightly more minimal, and potentially easier to maintain, than the one in `src/`. With the addition of the nbsphinx package, we can include notebooks like the one added above in the documentation.
The notebook I added only takes a shallow dive into how xarray/dask/zarr all work together. I'm happy to add more as necessary; please let me know if anything is unclear.
Also, I'm not sure how you all feel about merging versus squash-merging, but I suggest we (eventually) squash-merge this PR. I'm a fan of micro-commits as I work :) though they can muddy up the project history.