The spreadsheet timestamps are interpreted as the start of the sampling period (as in the final files), but the timestamps in GCWerks and GCCompare are the mid-point, which can differ by 20-40 minutes. This isn't a huge problem in most cases, but when using data_exclude.xlsx to remove bad data points (as I am currently doing for the CMN ADS), it can lead to the wrong points being removed if the flags are specified based on the GCCompare times. Would it be better to interpret the spreadsheet times as mid-points and adjust them in the same way as the actual data?
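As a rough illustration of the proposed adjustment: if the sampling period is known, a mid-point timestamp can be shifted back by half that period to recover the start time. This is only a sketch with a hypothetical helper name (`midpoint_to_start`), assuming the sampling period is available in minutes; it is not the actual code used in the pipeline.

```python
from datetime import datetime, timedelta

def midpoint_to_start(midpoint: datetime, sampling_period_minutes: float) -> datetime:
    """Shift a mid-point timestamp (as shown in GCWerks/GCCompare) back by
    half the sampling period to obtain the sampling start time (as used in
    the final files and the exclusion spreadsheet)."""
    return midpoint - timedelta(minutes=sampling_period_minutes / 2)

# A 40-minute sample whose mid-point is 12:20 started at 12:00.
start = midpoint_to_start(datetime(2023, 1, 1, 12, 20), 40)
print(start)
```

Applying the same shift to the spreadsheet times before matching would let flags be specified directly from the GCCompare display without the 20-40 minute mismatch.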