emilyhcliu opened 7 months ago
@emilyhcliu this is a quirk of the GSI converters... The current converter writes everything to one file, but JEDI will write it to 2 separate files. Does it work if we just symlink the _cov file to the satbias file?
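A minimal sketch of the symlink idea, assuming the file names above; the directory is a stand-in for the previous cycle's analysis directory:

```shell
# Illustrative only: point the _cov filename JEDI expects at the single
# combined file the GSI converter writes. Paths/names are stand-ins.
mkdir -p demo_bc
touch demo_bc/gdas.t18z.atms_npp.satbias.nc4   # stand-in for the combined file
ln -sf gdas.t18z.atms_npp.satbias.nc4 demo_bc/gdas.t18z.atms_npp.satbias_cov.nc4
ls -l demo_bc/
```

This only works if the _cov reader tolerates the extra variables present in the combined file.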
My preference is to combine the information in
gdas.t18z.atms_npp.satbias.nc4
gdas.t18z.atms_npp.satbias_cov.nc4
gdas.t18z.atms_npp.tlapse.txt
into a single netcdf file. It's much easier to keep track of one file than three.
What's the history behind the three file separation in JEDI?
There are separate read and write routines in UFO for the satbias and satbias_cov files, even though the formats are very similar.
I will note that, as part of the generalization of VarBC for aircraft, these file formats and the YAMLs for obs bias will change "soon", so we probably shouldn't put too much effort into engineering a solution for these files as they are now.
Do we have these files staged somewhere that we can manually copy for use? Such as /work2/noaa/da/eliu/UFO_eval/data/gsi_geovals_l127/nofgat_aug2021/20231009/bc/*2021080100*
atms_n20 with bias correction seems to work in fv3jedi_var.x. I used the same satbias.nc file as the input file in both the obs bias and obs bias covariance sections of the input YAML. As @CoryMartin-NOAA notes, we should probably hit pause on tinkering with radiance bias correction i/o given the pending changes.
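For reference, a rough sketch of pointing both inputs at the same file; the key names approximate the pre-generalization VarBC YAML layout and may differ between UFO versions:

```yaml
# Illustrative only -- key names may not match your UFO version exactly.
obs bias:
  input file: gdas.t18z.atms_npp.satbias.nc4
  covariance:
    prior:
      # same combined file (or a symlink to it) reused for the _cov input
      input file: gdas.t18z.atms_npp.satbias.nc4
```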
(My previous remark to you, @CoryMartin-NOAA, about strange increments was due to goes-16/17 amv & metop-a/b scatwnd. One or more of these produced unreasonable uv wind increments).
@RussTreadon-NOAA that is probably the linear obs operator issue that @emilyhcliu and I discovered earlier in the week.
Thanks for the reminder, @CoryMartin-NOAA .
Additional fv3jedi_var.x runs with different observation types present yield reasonable uv increments when processing ascatw_ascat_metop-a and ascatw_ascat_metop-b. Unreasonable uv increments occur when satwind_goes-16 and satwind_goes-17 are processed.
@RussTreadon-NOAA do your yamls have the linear obs operator section as shown in this PR? https://github.com/NOAA-EMC/GDASApp/pull/724/files If so, that's weird, as @emilyhcliu found reasonable increments.
Pretty sure I have Emily's change. It was merged into feature/gdas-validation and I did a git pull in my working copy. git status does not show any local modifications to satwind_goes-16.yaml or satwind_goes-17.yaml. Let me back up to prepatmiodaobs and run through atmanlrun.
@RussTreadon-NOAA @CoryMartin-NOAA I will repeat the satwind test, and then add satwind + scatwind together.
Also, after manually adding the satbias, satbias_cov, and tlapse files for ATMS, the end-to-end ATMS test ran to completion. I will investigate the details.
Reran prepatmiodaobs, atmanlinit, and atmanlrun. The only obs types processed by fv3jedi_var.x were satwind_goes-16 and satwind_goes-17. The initial CostJo values look OK:
0: CostJo : Nonlinear Jo(satwind_goes-16) = 7861.85, nobs = 329392, Jo/n = 0.0238678, err = 11.3996
0: CostJo : Nonlinear Jo(satwind_goes-17) = 10214.1, nobs = 429322, Jo/n = 0.0237912, err = 11.5809
The increment, however, looks unrealistic:
0: Increment print | number of fields = 8 | cube sphere face size: C768
0: eastward_wind | Min:-6.371890e+38 Max:+2.030937e-01 RMS:+1.687255e+35
0: northward_wind | Min:-6.371890e+38 Max:+1.663539e-01 RMS:+1.687255e+35
can winds not move to the north and west at several orders of magnitude faster than the speed of light?????? :-)
sure, this is jedi. anything is possible in a galaxy far, far away
satwind_goes-17.yaml contains

obs linear operator:
  name: VertInterp

obs and linear are in the wrong order. It should read

linear obs operator:
  name: VertInterp

satwind_goes-16.yaml already had linear obs operator:. Made the above change to a working copy of satwind_goes-17.yaml.
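A quick, hedged way to catch this misordering across a directory of obs YAMLs (the demo files below stand in for the real config tree):

```shell
# Sketch: list any YAML using the misordered 'obs linear operator:' key.
mkdir -p demo_cfg
printf 'obs linear operator:\n  name: VertInterp\n' > demo_cfg/satwind_goes-17.yaml
printf 'linear obs operator:\n  name: VertInterp\n' > demo_cfg/satwind_goes-16.yaml
grep -rl 'obs linear operator:' demo_cfg   # any hit needs 'linear obs operator:'
```

Only the goes-17 demo file is flagged, since the correct key in goes-16 does not contain the misordered string.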
Reran atmanlinit and atmanlrun with both goes-16 and goes-17 satwind processed. Observation stats before the solver look good (as they did before):
0: Jo Observations Errors:
0: Diagonal observation error covariance
0: satwind_goes-16 nobs= 329392 Min=7.6, Max=14, RMS=11.3996
0:
0: Diagonal observation error covariance
0: satwind_goes-17 nobs= 429322 Min=7.6, Max=14, RMS=11.5809
0:
0: End Jo Observations Errors
0: CostJo : Nonlinear Jo(satwind_goes-16) = 7861.85, nobs = 329392, Jo/n = 0.0238678, err = 11.3996
0: CostJo : Nonlinear Jo(satwind_goes-17) = 10214.1, nobs = 429322, Jo/n = 0.0237912, err = 11.5809
0: CostJo : Nonlinear Jo = 18075.9
Now the increments also look reasonable (single iteration with identity B):
0: Increment print | number of fields = 8 | cube sphere face size: C768
0: eastward_wind | Min:-2.016947e-01 Max:+1.931298e-01 RMS:+4.262270e-04
0: northward_wind | Min:-1.652080e-01 Max:+2.280027e-01 RMS:+4.151106e-04
Interesting tidbit. The final Increment print table includes non-zero increments for cloud_liquid_ice and cloud_liquid_water:
0: Increment print | number of fields = 8 | cube sphere face size: C768
0: eastward_wind | Min:-2.016947e-01 Max:+1.931298e-01 RMS:+4.262270e-04
0: northward_wind | Min:-1.652080e-01 Max:+2.280027e-01 RMS:+4.151106e-04
0: air_temperature | Min:+0.000000e+00 Max:+0.000000e+00 RMS:+0.000000e+00
0: surface_pressure | Min:+0.000000e+00 Max:+0.000000e+00 RMS:+0.000000e+00
0: specific_humidity | Min:+0.000000e+00 Max:+0.000000e+00 RMS:+0.000000e+00
0: cloud_liquid_ice | Min:+0.000000e+00 Max:+1.618770e-20 RMS:+1.293217e-23
0: cloud_liquid_water | Min:+0.000000e+00 Max:+1.474788e-19 RMS:+2.167418e-22
0: ozone_mass_mixing_ratio | Min:+0.000000e+00 Max:+0.000000e+00 RMS:+0.000000e+00
The cloud increments are extremely small. Where do these non-zero values originate: a variable transform, a change in numerical precision, something else?
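One small check on the precision hypothesis, using only the standard library: a zero double written as float32 and read back is still exactly zero, so a pure precision change would not by itself turn zero increments into ~1e-20 values (which points more toward a transform touching those fields):

```python
import struct

def roundtrip_f32(x: float) -> float:
    """Pack a Python float (double) as IEEE float32 and unpack it again."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Exact zero survives the float32 round trip exactly.
print(roundtrip_f32(0.0))
# 1.6e-20 is well inside the float32 normal range (min normal ~1.2e-38),
# so values of this magnitude are representable, not precision noise.
print(roundtrip_f32(1.618770e-20))
```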
@emilyhcliu, I found it necessary to change fieldOfViewNumber in parm/ioda/bufr2ioda/bufr2ioda_atms.yaml from type: float to type: int. Without this change, fv3jedi_var.x aborted with
6: Exception: source_column: 0
6: source_filename: /work2/noaa/da/rtreadon/gdas-validation/global-workflow/sorc/gdas.cd/ioda/src/engines/ioda/include/ioda/Variables/Variable.h
6: source_function: Variable_Implementation ioda::detail::Variable_Base<Variable_Implementation>::read(gsl::span<DataType>, const ioda::Selection &, const ioda::Selection &) const [with DataType = int; Marshaller = ioda::detail::Object_Accessor_Regular<int, int>; TypeWrapper = ioda::Types::GetType_Wrapper<int, 0>; Variable_Implementation = ioda::Variable]
6: source_line: 539
6:
6: Exception: oops::Variational<FV3JEDI, UFO and IODA observations> terminating...
After changing fieldOfViewNumber (sensorScanPosition in the ioda-format atms dump file), fv3jedi_var.x ran to completion.
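For reference, the relevant entry after the fix looks roughly like this; the surrounding structure is illustrative, and only the type change itself is taken from the comment above:

```yaml
# parm/ioda/bufr2ioda/bufr2ioda_atms.yaml (sketch; surrounding keys illustrative)
variables:
  fieldOfViewNumber:
    type: int    # was: float; ioda reads this variable as int and aborts on a type mismatch
```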
@CoryMartin-NOAA and @RussTreadon-NOAA When running the 2021080100 gdasatmanlinit job, the processing looks for the following satbias files from the previous cycle in 20210731/18/atmos/analysis/atmos:

gdas.t18z.atms_npp.satbias.nc4 (stores bias_coefficients)
gdas.t18z.atms_npp.satbias_cov.nc4 (stores background errors for the bias coefficients, bias_coeff_errors)
gdas.t18z.atms_npp.tlapse.txt

The input satbias files we used in the UFO evaluation (2021080100) are:

atms_npp_tlapmean_2021073118.txt
atms_npp_satbias_2021073118.nc4

We do not have atms_npp_satbias_cov_2021073118.nc4, but we have both bias_coefficients and bias_coeff_errors (values in satbias_pc) in atms_npp_satbias_2021073118.nc4.

Do we want bias_coefficients and bias_coeff_errors in the same file or in separate files? Our radiance YAML is configured to have bias_coefficients and bias_coeff_errors in separate files.
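The mechanics of splitting one combined file into the two files the current radiance YAML expects are simple. Here is a dependency-free sketch that uses json files as a stand-in for netCDF, with the variable names (bias_coefficients, bias_coeff_errors) assumed from the discussion above:

```python
import json

# Stand-in for the combined GSI-style satbias file (netCDF in reality; json
# here so the sketch runs without netCDF4). Values are made up for illustration.
combined = {
    "channels": [1, 2, 3],
    "bias_coefficients": [[0.1, 0.2], [0.0, 0.3], [0.2, 0.1]],
    "bias_coeff_errors": [[0.01, 0.02], [0.01, 0.01], [0.02, 0.03]],
}

# Analogue of gdas.*.satbias.nc4: coefficients only.
with open("satbias.json", "w") as f:
    json.dump({"channels": combined["channels"],
               "bias_coefficients": combined["bias_coefficients"]}, f)

# Analogue of gdas.*.satbias_cov.nc4: coefficient background errors only.
with open("satbias_cov.json", "w") as f:
    json.dump({"channels": combined["channels"],
               "bias_coeff_errors": combined["bias_coeff_errors"]}, f)
```

In practice the same split (or the reverse merge) would be done with a netCDF tool or library; this sketch only shows how little information separates the one-file and two-file layouts.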