I will use this thread to discuss another issue I have encountered with the polar datasets.
I am loading a dataset like this:
import intake

# Open the polar catalog and load the NSIDC_0081 sea ice dataset as a dask-backed xarray Dataset
catalog_url = 'https://raw.githubusercontent.com/NicWayand/polar.pangeo.io-deploy/staging/deployments/polar.pangeo.io/image/catalog.yaml'
cat = intake.Catalog(catalog_url)
ds_nsidc = cat.NSIDC_0081.to_dask()
ds_nsidc
The dataset looks like this:
<xarray.Dataset>
Dimensions: (time: 1384, x: 304, y: 448)
Coordinates:
hole_mask (y, x) int8 dask.array<shape=(448, 304), chunksize=(448, 304)>
lat (x, y) float64 dask.array<shape=(304, 448), chunksize=(304, 448)>
lon (x, y) float64 dask.array<shape=(304, 448), chunksize=(304, 448)>
* time (time) datetime64[ns] 2015-01-01 2015-01-02 2015-01-03 ...
* x (x) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ...
xm (x) int64 dask.array<shape=(304,), chunksize=(304,)>
* y (y) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ...
ym (y) int64 dask.array<shape=(448,), chunksize=(448,)>
Data variables:
area (time) float64 dask.array<shape=(1384,), chunksize=(1,)>
extent (time) float64 dask.array<shape=(1384,), chunksize=(1,)>
sic (time, y, x) float64 dask.array<shape=(1384, 448, 304), chunksize=(1, 448, 304)>
It is notable that there is zero metadata anywhere in this dataset (xarray dataset and variable attributes are all empty). Consequently it is very hard for the user to know what they are looking at. NetCDF files distributed by official data providers strive hard to be CF Compliant. When we put data in the cloud, we should strive to preserve the metadata as much as possible.
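A quick way to confirm this (assuming the ds_nsidc object loaded above; the exact output is illustrative):

# Global and per-variable attribute dictionaries all come back empty
print(ds_nsidc.attrs)          # {}
print(ds_nsidc['sic'].attrs)   # {} -- no long_name, units, or grid_mapping
print(ds_nsidc['lat'].attrs)   # {} -- no standard_name or units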
I'm curious how these datasets were produced and how we might go about recovering the metadata. Particularly important is information about the map projection.
For an example of a CF-compliant dataset, you can look at this example from the Altimetry Analysis Use Case. In addition to the dataset-level metadata, there is also variable-specific metadata for each variable.
If you opened the same data with a gcsfs mapper directly, would you see the metadata attributes?
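Something along these lines would show whether the attributes exist in the underlying store; the bucket path below is a placeholder since I don't know where the Zarr store actually lives:

import gcsfs
import xarray as xr

# Open the Zarr store directly from GCS (anonymous read); the path is hypothetical
fs = gcsfs.GCSFileSystem(token='anon')
mapper = fs.get_mapper('some-bucket/path/to/nsidc_0081.zarr')  # placeholder path
ds = xr.open_zarr(mapper)
print(ds.attrs)  # an empty dict here would mean the attributes were never written to the store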
Thanks @rabernat for bringing this up. The metadata is missing and I will raise an issue to add it back in.
On a similar thread... does anyone have suggestions for how to get a DOI for a Zarr dataset that is updated daily in a Google Cloud bucket? Ideally the DOI would point to the most recent version, but I am fine freezing it for the DOI. I have used sites like Zenodo before, but I would have to tar the Zarr store first before uploading, which seems inefficient.
OK @rabernat, metadata added. Hope it is useful for your class now. If you (or anyone else) find any issues with the metadata, please let me know.
<xarray.Dataset>
Dimensions: (fore_time: 52, init_end: 48, model: 20, x: 304, y: 448)
Coordinates:
crs object ...
* fore_time (fore_time) timedelta64[ns] 0 days 7 days 14 days 21 days ...
* init_end (init_end) datetime64[ns] 2018-01-07 2018-01-14 2018-01-21 ...
init_start (init_end) datetime64[ns] dask.array<shape=(48,), chunksize=(48,)>
lat (x, y) float64 dask.array<shape=(304, 448), chunksize=(152, 224)>
lon (x, y) float64 dask.array<shape=(304, 448), chunksize=(152, 224)>
* model (model) object 'Observed' 'awispin' 'climatology' ...
valid_end (init_end, fore_time) datetime64[ns] dask.array<shape=(48, 52), chunksize=(48, 52)>
valid_start (init_end, fore_time) datetime64[ns] dask.array<shape=(48, 52), chunksize=(48, 52)>
* x (x) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ...
* y (y) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 ...
Data variables:
SIP (init_end, model, fore_time, y, x) float64 dask.array<shape=(48, 20, 52, 448, 304), chunksize=(1, 1, 1, 448, 304)>
anomaly (init_end, model, fore_time, y, x) float64 dask.array<shape=(48, 20, 52, 448, 304), chunksize=(1, 1, 1, 448, 304)>
mean (init_end, model, fore_time, y, x) float64 dask.array<shape=(48, 20, 52, 448, 304), chunksize=(1, 1, 1, 448, 304)>
Attributes:
comment: Weekly mean sea ice concentration forecasted ...
contact: nicway@uw.edu
creator_email: nicway@uw.edu
creator_name: Nicholas Wayand, University of Washington
creator_url: https://atmos.uw.edu/sipn/
date_created: 2018-12-03T00:00:00
date_modified: 2018-12-04T16:02:19
geospatial_lat_max: 89.83682
geospatial_lat_min: 31.102670000000003
geospatial_lat_resolution: ~25km
geospatial_lat_units: degrees_north
geospatial_lon_max: 179.81398000000002
geospatial_lon_min: -180.00000000000003
geospatial_lon_resolution: ~25km
geospatial_lon_units: degrees_east
history: 2018-12-04T16:02:19: updated by Nicholas Wayand
institution: UW, SIPN, ARCUS
keywords: Arctic > Sea ice concentration > Prediction
product_version: 1.0
project: Sea Ice Prediction Network Phase II
references: Wayand, N.E., Bitz, C.M., and E. Blanchard-Wr...
source: Numerical model predictions and Passive micro...
summary: Dataset is updated daily with weekly sea ice ...
time_coverage_end: 2019-11-24T00:00:00
time_coverage_start: 2018-01-01T00:00:00
title: SIPN2 Sea ice Concentration Forecasts and Obs...
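In case it helps anyone doing the same thing, the general pattern for attaching attributes before (re)writing a Zarr store looks roughly like this; the variable-level values and bucket path are placeholders, not necessarily what was actually used here:

import gcsfs

# Attach dataset-level (ACDD-style) and variable-level (CF-style) attributes, then rewrite the store
ds.attrs.update({
    'title': 'SIPN2 Sea ice Concentration Forecasts and Observations',
    'institution': 'UW, SIPN, ARCUS',
    'contact': 'nicway@uw.edu',
})
ds['SIP'].attrs.update({
    'long_name': 'sea ice probability',  # placeholder variable metadata
    'units': '1',
})
fs = gcsfs.GCSFileSystem()  # write access requires credentials
ds.to_zarr(fs.get_mapper('your-bucket/sipn2.zarr'), mode='w')  # placeholder bucket/path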
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it had not seen recent activity. The issue can always be reopened at a later date.
Is polar.pangeo.io being used?
This cluster has had 3 compute instances running constantly since November, at a cost of about $300 per month. I have not heard much from @NicWayand since the initial setup. If the cluster is being used, great, carry on! If not, let's assess whether it makes sense to keep paying for it.
FWIW, I spent a few minutes cleaning up the polar cluster today. It will now idle at its intended 2 compute instances.
Any update from @NicWayand? Should we shut this cluster down? Consolidate with others?
xref https://github.com/pangeo-data/pangeo-cloud-federation/issues/215
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it had not seen recent activity. The issue can always be reopened at a later date.
It seems that polar.pangeo.io has been shut down. Is that correct? Has it been merged after all?
(Apologies if I have missed something.)
@NicWayand published a great blog post today about polar.pangeo.io! https://medium.com/pangeo/polar-deployment-of-pangeo-96865774287c
He concludes by asking other people to get involved by using the cluster and/or adding datasets.
The repo he points readers to is the general pangeo one, but I think it would be best to have a dedicated forum where current and potential polar.pangeo.io users can interact. Currently the cluster is deployed from https://github.com/NicWayand/polar.pangeo.io-deploy, which doesn't have an issue tracker because it's a fork.
Would it make sense to move that repo here with the other pangeo deploy repos, and to un-fork it so it is a standalone, full-fledged repo? More generally, what sort of interaction between users and cluster admins do we want to encourage?
Somewhat related to #476.