Closed lbesnard closed 2 years ago
@lbesnard why are the old files failing the CF/GHRSST checks? Can we fix the files ourselves?
It's better to let Edward do it, since he is the source of the data. When he's back from leave I'll get him to tackle this task.
All NetCDF files were pushed back to the incoming folder, fixing this issue.
From @ocehugo's email:
The issue:
For the record: SRS GHRSST data went through a major reprocess 2+ years ago. All products were pushed back to the generic timestep harvester using a new schema and with new GeoServer layers.
However, four layers didn't follow the same path; @ocehugo found an issue with one of them. These four layers, used in production, are created by the non-generic `SRS_SST_GRIDDED` harvester (now removed from the GitHub harvester repo). We also have equivalent layers created by the `GENERIC_TIMESTEP` harvester. The respective names are:

The deprecated SRS SST harvester was removed a long time ago from `10-aws`. See the first and latest available data in this layer for example:

On the other hand, the generic timestep harvester is more up to date, matching the latest data available on THREDDS, but only for the years 2015 and above:
I tried pushing the files not in the new generic schema from S3 back into the SRS SST `INCOMING_DIR`, unsuccessfully, because the old files don't pass the CF/GHRSST checker. So I guess this is why we never finished the transition of these layers.

What to do next?
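Since the old files fail on metadata checks, the eventual fix amounts to finding and patching the global attributes the checker expects. As a minimal, dependency-free sketch of that logic (the attribute names below are illustrative examples, not the actual CF/GHRSST requirements; in practice you'd read the attributes from the file with netCDF4 rather than a plain dict):

```python
# Illustrative required-attribute list; NOT the authoritative CF/GHRSST set.
REQUIRED_GLOBAL_ATTRS = {
    "Conventions",   # e.g. "CF-1.6"
    "title",
    "date_created",
}

def missing_attrs(global_attrs):
    """Return the required global attributes absent from a file's metadata.

    `global_attrs` is a plain dict standing in for a NetCDF file's
    global-attribute mapping, to keep the sketch dependency-free.
    """
    return sorted(REQUIRED_GLOBAL_ATTRS - set(global_attrs))

def patch_attrs(global_attrs, defaults):
    """Fill in any missing required attributes from `defaults`."""
    patched = dict(global_attrs)
    for name in missing_attrs(global_attrs):
        if name in defaults:
            patched[name] = defaults[name]
    return patched

# A hypothetical old-schema file missing its Conventions attribute:
old_file = {"title": "SRS SST product", "date_created": "2010-01-01"}
print(missing_attrs(old_file))                        # ['Conventions']
fixed = patch_attrs(old_file, {"Conventions": "CF-1.6"})
print(missing_attrs(fixed))                           # []
```

Running each candidate file through a check like this before re-pushing to the incoming folder would show exactly which attributes block the CF/GHRSST checker, rather than discovering the failures one harvester run at a time.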
@atkinsn FYI