Closed: matthiasmengel closed this issue 4 years ago
Writing the .h5 timeseries to one large netcdf takes 100 minutes. This is still acceptable, so this issue has low priority.
To make our output traceable, let us add the githash and the runid as global attributes to the netcdf file.
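Setting such provenance attributes could look like the following minimal sketch with xarray; the `run_id` value and the output filename are hypothetical, and the git call falls back to `"unknown"` outside a repository:

```python
import os
import subprocess
import tempfile

import numpy as np
import xarray as xr

# Obtain the current commit hash; degrade gracefully if git is unavailable.
try:
    githash = subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True
    ).strip()
except (subprocess.CalledProcessError, FileNotFoundError, OSError):
    githash = "unknown"

runid = "run-001"  # hypothetical run identifier

# Toy dataset standing in for the merged timeseries.
ds = xr.Dataset({"tas": ("time", np.zeros(3))})
ds.attrs["git_hash"] = githash
ds.attrs["run_id"] = runid

out = os.path.join(tempfile.mkdtemp(), "out.nc")
ds.to_netcdf(out)
```

Any tool reading the file (`ncdump -h`, `xr.open_dataset`) then shows which code version and run produced it.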
Further: name the variable in the `.h5` timeseries files not `y` but `tas` (and `tas_orig`) or similar.

Attributes now added, see c4e469e5f754442. Closing this.
Until now, we have only worked with subsets of the data, maximum 5. The full dataset will be 25x as large. We need to think about how to efficiently postprocess the timeseries into a netcdf file. HDF5, xarray, or parallel netcdf are options to explore.
`merge_parallel.py` and `merge_submit.sh` may be reused.
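One possible shape for that postprocessing, sketched with plain xarray. The `cell` concat dimension and the per-chunk file layout are assumptions, and the sketch writes netcdf chunks for self-containment (real `.h5` chunks would need `engine="h5netcdf"` or similar); for the full 25x dataset, `xr.open_mfdataset` with dask chunks would keep memory bounded instead of opening every chunk eagerly:

```python
import os
import tempfile

import xarray as xr


def merge_timeseries(paths, out_path, concat_dim="cell"):
    """Concatenate per-chunk timeseries files into one netcdf file.

    Assumes each chunk file holds the same variables over a disjoint
    slice of `concat_dim` (hypothetical layout).
    """
    datasets = [xr.open_dataset(p) for p in paths]
    merged = xr.concat(datasets, dim=concat_dim)
    merged.to_netcdf(out_path)
    return merged


# Usage with two toy chunk files, each covering one grid cell.
tmp = tempfile.mkdtemp()
paths = [os.path.join(tmp, f"chunk{i}.nc") for i in range(2)]
xr.Dataset(
    {"tas": (("cell", "time"), [[1.0, 2.0]])},
    coords={"cell": [0], "time": [0, 1]},
).to_netcdf(paths[0])
xr.Dataset(
    {"tas": (("cell", "time"), [[3.0, 4.0]])},
    coords={"cell": [1], "time": [0, 1]},
).to_netcdf(paths[1])

merged = merge_timeseries(paths, os.path.join(tmp, "merged.nc"))
```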