Open ggalibert opened 5 years ago
Thanks for reporting this issue, Guillaume. Will look into it and figure out what the problem is.
I was able to reproduce the error you reported and need to find out what is going on. I am concerned similar issues may have propagated to other nodes, as I am using the same scripts and procedures across the network. Do you have evidence of something similar occurring elsewhere?
Sorry, at this stage we have found one file/node that caused this problem from 2007 up to 20110125T040000; after that date we didn't look any further. There could definitely be more impacted files/nodes.
I am going through the QC scripts and steps and I can't find an obvious problem in them. What I did find, however, is that at the grid point where VCUR_sd == 0 for VCUR_quality_control == 1, the source data itself defines V StdDev = 0. This is the line of the ASCII file with the velocities generated by the proprietary software (not ACORN software):
```
%% Longitude    Latitude     U comp  V comp   VectorFlag  U StdDev  V StdDev  Covariance  X Distance  Y Distance  Range    Bearing  Velocity  Direction  Site Contributors
%%  (deg)        (deg)       (cm/s)  (cm/s)   (GridCode)  Quality   Quality   Quality     (km)        (km)        (km)     (True)   (cm/s)    (True)     #1 #2
139.8145805   -38.2658742    0.032   -18.262  0           1.600     0.000     0.000       30.0000     -24.0000    38.4189  128.7    18.262    179.9      1  1
```
The V StdDev quantity is indeed 0.000, so I guess this may be a rounding error in their software rather than a bug.
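For reference, a minimal sketch of how such rows could be spotted in the proprietary ASCII files. The column order is taken from the header above; the function name and parsing approach are my own assumptions, not part of any ACORN tooling:

```python
# Hypothetical scan of the proprietary ASCII total-vector file for rows
# where U StdDev or V StdDev is exactly zero (columns 6 and 7 per the
# header shown above; '%%' lines are comments/headers).
def find_zero_stddev(lines):
    """Return the data rows whose U StdDev or V StdDev is exactly 0.0."""
    flagged = []
    for line in lines:
        if line.startswith('%'):          # skip header/comment lines
            continue
        cols = line.split()
        u_sd, v_sd = float(cols[5]), float(cols[6])
        if u_sd == 0.0 or v_sd == 0.0:
            flagged.append(cols)
    return flagged

# The example row from this comment: V StdDev is 0.000, so it is flagged.
sample = ("139.8145805 -38.2658742 0.032 -18.262 0 1.600 0.000 0.000 "
          "30.0000 -24.0000 38.4189 128.7 18.262 179.9 1 1")
print(len(find_zero_stddev([sample])))  # 1
```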
What I am *not* including in the DM QC (but am including in the RT QC) is an error flag on the contribution of radial velocities from each station to the vector. The proprietary software should use at least 2 radials per site when calculating the vectors. Unfortunately this is not the case here, and data quality is degraded as a result: both when too few radials are used and, more generally, when the ratio NOBS1/NOBS2 is strongly unbalanced towards SITE1 or SITE2. This is accounted for in the RT procedure but not yet in the DM version. I will correct it now and, if required, will upload a revised version of the netcdf files. For the moment, what I would suggest is to flag as QCFlag=4 data for which:

a) NOBS1 == 1 || NOBS2 == 1;
b) NOBS1/NOBS2 >= 10 || NOBS2/NOBS1 >= 10.

Conditions a) and b) should be strong enough to get rid of the major problems.
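The suggested conditions can be sketched as follows. This is a minimal illustration, assuming NOBS1/NOBS2 are the per-site radial counts for a grid point; the function name and existing-flag handling are hypothetical:

```python
# Sketch of the suggested DM flagging rule: QCFlag=4 when either site
# contributed a single radial (condition a) or the radial contributions
# are unbalanced by 10:1 or more (condition b).
def suggest_qc_flag(nobs1, nobs2, current_flag=1):
    """Return 4 (bad data) if the radial contributions fail a) or b),
    otherwise keep the current flag."""
    too_few = nobs1 == 1 or nobs2 == 1                        # condition a
    unbalanced = nobs1 >= 10 * nobs2 or nobs2 >= 10 * nobs1   # condition b
    return 4 if too_few or unbalanced else current_flag

print(suggest_qc_flag(1, 8))    # 4: only one radial from SITE1
print(suggest_qc_flag(40, 3))   # 4: ratio unbalanced (40/3 >= 10... via 40 >= 30)
print(suggest_qc_flag(12, 9))   # 1: balanced enough, flag unchanged
```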
The reason for this check on NOBS1 and NOBS2 is that an unbalanced distribution of radial observations introduces biases in direction and speed. These biases are enhanced by the position in the grid (the so-called GDOSA, geometrical dilution of statistical accuracy) and are also flow-dependent (the bias is larger in the presence of horizontal current shear, such as in an eddy or a boundary current). This is one of the limitations of the wide-area-averaging least-squares fit approach for vector calculation.
FYI: I am not expecting WERA nodes to be impacted by this issue, as the procedures are slightly different.
I don't know yet if we'll have to reprocess the files to regenerate the netcdf files. If so, however, it should be relatively quick, as this correction is done at the last stage (before netcdf creation) and we have stored all concatenated and QC'd files on the IRDS.
@scosoli for information:
Some BONC HF files have VCUR_sd == 0 where VCUR_quality_control == 1.
Example: http://ci-eb-thredds-devl-data.aodn.org.au/thredds/dodsC/IMOS/ACORN/gridded_1h-avg-current-map_QC/BONC/2011/01/25/IMOS_ACORN_V_20110125T040000Z_BONC_FV01_1-hour-avg.nc
This indicates a possible problem in the QC.
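The reported inconsistency can be checked with a small script. The sketch below assumes the `VCUR_sd` and `VCUR_quality_control` arrays have already been read from the gridded file (e.g. via the netCDF4 or xarray libraries); the function name is hypothetical, but the variable names match those in the report:

```python
import numpy as np

# Count grid points flagged as good (QC == 1) that nevertheless report a
# zero standard deviation -- the inconsistency described in this issue.
def count_suspect_points(vcur_sd, vcur_qc):
    """Return the number of points where VCUR_quality_control == 1
    but VCUR_sd == 0."""
    suspect = (vcur_qc == 1) & (vcur_sd == 0.0)
    return int(np.count_nonzero(suspect))

# Tiny synthetic example (real data would come from the netCDF file):
sd = np.array([[0.0, 1.6], [0.0, 2.1]])
qc = np.array([[1, 1], [4, 1]])
print(count_suspect_points(sd, qc))  # 1
```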