This is just to keep a record of enhancements regarding the handling of station data for big datasets. As a use case, it is proposed that cf-python should be able to handle a DSG dataset stored in contiguous ragged array format with 100,000+ stations and more than 1 billion observations (see https://github.com/zequihg50/ghcnd-dsg).
An in-memory cache of the start and end dates recorded for each station is trivial to build in this format, assuming that each station's time series is sorted on disk. When subspacing in time, this cache makes it possible to skip reading from the file, the slowest kind of access, for stations that contain no data in the requested period.
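A minimal sketch of such a cache, assuming a contiguous ragged array in the CF sense: a hypothetical `row_size` variable giving the number of observations per station, and a flat 1-D `time` variable in which each station's block is sorted. The variable names and the tiny in-memory arrays are illustrative, not the actual GHCN-D dataset:

```python
import numpy as np

# Hypothetical contiguous ragged array: row_size[i] is the number of
# observations for station i; all observations are stored back to back
# in a single 1-D "time" array, sorted within each station's block.
row_size = np.array([3, 2, 4])
time = np.array([0, 1, 2,      # station 0
                 5, 9,         # station 1
                 1, 3, 4, 8])  # station 2

# Build the cache in one pass: each station's block offset follows from a
# cumulative sum of row_size, and because each series is sorted on disk,
# the first and last elements of a block are its start and end dates.
ends = np.cumsum(row_size)
starts = ends - row_size
cache = {i: (time[s], time[e - 1]) for i, (s, e) in enumerate(zip(starts, ends))}

def stations_overlapping(t0, t1):
    """Return the stations whose cached [start, end] interval intersects
    [t0, t1]; only these blocks need to be read from the file."""
    return [i for i, (lo, hi) in cache.items() if lo <= t1 and hi >= t0]

print(stations_overlapping(6, 10))  # -> [1, 2]
```

Building the cache touches only the `row_size` variable plus two values of `time` per station, so it stays cheap even for 100,000+ stations, while a time subspace then costs one dictionary scan instead of one file read per station.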