aodn / aodn-portal

AODN Open Geospatial Portal
https://portal.aodn.org.au/
GNU General Public License v3.0

Failed test: Time filtering for aggregatable collections uses "greater/less or equal" #1350

Closed by aodntester 10 years ago

aodntester commented 10 years ago

Tested on the IMOS - ACORN - Rottnest Shelf HF ocean radar site (Western Australia, Australia) - QC'd sea water velocity layer. Select the date 2014/03/13 and the times 00:30 UTC and 02:29 UTC. This should return 3 files, corresponding to 00:30, 01:30 and 02:29 UTC, but only two files are returned.

T2694: Time filtering for aggregatable collections uses "greater/less or equal" https://aodn.testrail.com/index.php?/tests/view/2694

jonescc commented 10 years ago

The problem occurs because of a mismatch between the precision of the temporal extent data harvested by the acorn_hourly_avg_qc Talend harvester (seconds precision, from the global attributes) and the precision of the temporal extent data returned by ncWMS (millisecond precision). In the example above, the end time returned by ncWMS and selected in the Portal is 02:29:59.999, whereas the end time stored in the database is 02:30:00, so this end time is not selected. I'll check with Guilly whether the time can be harvested to millisecond precision by the Talend harvester; otherwise we may need to add a 1 millisecond tolerance value to the CQL query.
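
For illustration, here is a minimal sketch of the tolerance idea in Java (hypothetical code, not the portal's actual filter-building logic; the field name time and the helper method are placeholders): widen the selected end time by 1 millisecond before building the CQL time clause, so an ncWMS boundary value of 02:29:59.999 still matches a record stored as 02:30:00.000.

    import java.time.Instant;

    public class CqlTimeFilter {

        // Hypothetical helper: build a CQL time clause with a 1 ms tolerance on the end time.
        static String buildTimeFilter(Instant start, Instant end) {
            Instant tolerantEnd = end.plusMillis(1); // absorb ncWMS's millisecond truncation
            return String.format("time >= '%s' AND time <= '%s'", start, tolerantEnd);
        }

        public static void main(String[] args) {
            System.out.println(buildTimeFilter(
                    Instant.parse("2014-03-13T00:30:00Z"),
                    Instant.parse("2014-03-13T02:29:59.999Z")));
            // prints: time >= '2014-03-13T00:30:00Z' AND time <= '2014-03-13T02:30:00Z'
        }
    }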

ggalibert commented 10 years ago

From the NetCDF files, using ncdump, I can retrieve:

ggalibert@5-nsp-mel:/mnt/opendap/1/IMOS/opendap/ACORN/gridded_1h-avg-current-map_QC/ROT/2014/03/13$ ncdump -t -v TIME IMOS_ACORN_V_20140313T003000Z_ROT_FV01_1-hour-avg.nc | grep 'TIME = "'
 TIME = "2014-03-13 00:30" ;
ggalibert@5-nsp-mel:/mnt/opendap/1/IMOS/opendap/ACORN/gridded_1h-avg-current-map_QC/ROT/2014/03/13$ ncdump -t -v TIME IMOS_ACORN_V_20140313T013000Z_ROT_FV01_1-hour-avg.nc | grep 'TIME = "'
 TIME = "2014-03-13 01:30" ;
ggalibert@5-nsp-mel:/mnt/opendap/1/IMOS/opendap/ACORN/gridded_1h-avg-current-map_QC/ROT/2014/03/13$ ncdump -t -v TIME IMOS_ACORN_V_20140313T023000Z_ROT_FV01_1-hour-avg.nc | grep 'TIME = "'
 TIME = "2014-03-13 02:30" ;

Godiva (and so, I assume, ncWMS) shows: [screenshot from 2014-08-21 10 08 56] with the third time being 02:29:59.999.

The 123 portal shows: [screenshot from 2014-08-21 10 13 53], which is consistent with Godiva.

The date in the database is: [screenshot from 2014-08-21 10 27 45], which is consistent with the value stored in the NetCDF file.

So I think the problem doesn't stem from the Talend harvester, which is doing its job just fine, but from ncWMS. I had already pointed out this problem in the past.

The major problem is that GoGoDuck / AODAAC aggregation requests are based on the portal time values, which are not consistent with (and actually not correct with respect to) the actual time values in the files.

A related issue is this one: https://github.com/aodn/aodn-portal/issues/1211

If you look at the last comments and compare them to the name of the fix that was applied, it doesn't look like what we desired has been done...

Could we fix ncWMS so that correct values are provided via WMS?

OPeNDAP seems to work fine and provides enough precision to recover the actual value:

If you go to http://thredds.aodn.org.au/thredds/dodsC/IMOS/ACORN/gridded_1h-avg-current-map_QC/ROT/2014/03/13/IMOS_ACORN_V_20140313T023000Z_ROT_FV01_1-hour-avg.nc.ascii?TIME[0:1:0] and use the TIME value (days since 01-01-1950) to create a date in Matlab, you would get

>> datestr(23447.104166666664 + datenum('01-01-1950 00:00:00'), 'dd-mmm-yyyy HH:MM:SS.FFF')

ans =

13-Mar-2014 02:30:00.000

Exactly what we want (so OPeNDAP is not buggy).

To get the same result as ncWMS in Matlab, I would have to drop 4 digits from the original double precision value:

>> datestr(23447.10416666 + datenum('01-01-1950 00:00:00'), 'dd-mmm-yyyy HH:MM:SS.FFF')

ans =

13-Mar-2014 02:29:59.999

jonescc commented 10 years ago

OK, so I've tracked this down to the netcdf-java library's ucar.nc2.time.CalendarDate add method, where the following calculation is done:

return new CalendarDate(cal, dateTime.plus( (long) (value * 86400 * 1000) ));

The cast to long truncates the value. It would be better to round to the nearest millisecond; truncating doesn't make sense. I'll raise an issue with Unidata, prepare a pull request, and look at applying the same change locally until the fix is released.
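
As a quick sanity check, here is a minimal standalone sketch (my own illustration, not the ncWMS code path itself) applying the quoted expression to the TIME value of 23447.104166666664 days since 1950-01-01 from the OPeNDAP output above. The cast loses exactly one millisecond, while rounding recovers 02:30:00.000; the java.time conversion is only used here to display the result.

    import java.time.Instant;

    public class TruncationDemo {
        public static void main(String[] args) {
            // TIME value from the ACORN file, in days since 1950-01-01 (see OPeNDAP output above)
            double days = 23447.104166666664;
            Instant epoch1950 = Instant.parse("1950-01-01T00:00:00Z");

            long truncatedMillis = (long) (days * 86400 * 1000);  // the cast in CalendarDate.add
            long roundedMillis = Math.round(days * 86400 * 1000); // the proposed rounding

            System.out.println(epoch1950.plusMillis(truncatedMillis)); // 2014-03-13T02:29:59.999Z
            System.out.println(epoch1950.plusMillis(roundedMillis));   // 2014-03-13T02:30:00Z
        }
    }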

dnahodil commented 10 years ago

There is a fix for this now, which will go into a THREDDS build that we do. It will probably go through deployment and testing next iteration.

Hopefully it will get taken into the main THREDDS codebase, too.

jonescc commented 10 years ago

Changes were applied to the core THREDDS repository last night and are available in 4.3.23/4.5.3. See https://github.com/Unidata/thredds/issues/73

dnahodil commented 10 years ago

Nice one, @jonescc!

dnahodil commented 10 years ago

The fixed version of ncWMS has now been deployed and this bug is fixed in prod.