Closed climbfuji closed 4 months ago
@SamuelTrahanNOAA
No. We definitely should not have incorrect information in the units field.
The ideal solution was discussed in a CCPP meeting, but it seems none of us copied that information to github. Sorry about that.
We would rather have flash rate relative to a configurable base rate. Changes would be these:
- Units are changed to "unitless."
- Long name explains it is relative to a base rate.
- New variable contains the base rate in flashes per second.
- Lightning code is updated to scale to the base rate: it divides the computed flash rate (in flashes per five minutes) by the requested base rate, converted to the same units.
- Model requests a base rate equivalent to one flash per five minutes.
This should result in multiplying the value by 1 inside the lightning diagnostic code, which should not change the results. It has the advantages of being consistent, intuitive, non-answer-changing, and invisible to the post-processing.
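The arithmetic behind the proposal above can be sketched as follows. This is an illustrative Python sketch, not the actual CCPP code; the function and argument names are hypothetical, and it assumes the scheme's native flash rate is in flashes per 5 minutes while the host supplies the base rate in flashes per second:

```python
# Illustrative sketch of the base-rate proposal (hypothetical names).
# The scheme natively computes the flash rate in flashes per 5 minutes;
# the host supplies the configurable base rate in flashes per second.

def relative_flash_rate(rate_per_5min, base_rate_per_sec):
    """Return the flash rate relative to the base rate (unitless)."""
    base_rate_per_5min = base_rate_per_sec * 300.0  # 5 minutes = 300 seconds
    return rate_per_5min / base_rate_per_5min

# If the model requests a base rate of one flash per 5 minutes
# (1/300 flashes per second), the division is effectively a multiply
# by 1 and the output values are unchanged.
```

With `base_rate_per_sec = 1.0 / 300.0`, the relative rate equals the native rate, which is the non-answer-changing behavior described above.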
I'll leave that up to y'all, as long as it doesn't delay these PRs that need to go in as soon as the commit queue allows (and rt.sh finishes).
We discussed this in a CCPP meeting, but I thought we ultimately decided to do what Dom did here.
I don't love this solution, as we are introducing another variable just because of a wonky unit. I think a more robust solution would be for the scheme to output fields in standard units (flashes min-1) and to introduce a new variable attribute in the framework, say scale_factor_out=0.2, that is applied in the cap. Let me see if I can make this happen today.
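A minimal sketch of the proposed attribute, assuming `scale_factor_out` is a per-variable metadata entry that the generated cap applies when copying the scheme's output to the host. The attribute name comes from the comment above and did not exist in the framework at the time; this is a Python stand-in, not framework code:

```python
# Hedged sketch: a cap-side multiply by scale_factor_out (hypothetical
# attribute). A factor of 0.2 (= 1/5) converts flashes per 5 minutes
# to flashes per minute.

def apply_scale_factor_out(field, scale_factor_out=0.2):
    """Rescale a diagnostic field on output, as a generated cap might."""
    return [v * scale_factor_out for v in field]
```

The appeal of this design is that the scheme itself only ever deals in standard units; the odd output unit becomes a metadata concern.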
> We discussed this in a CCPP meeting, but I thought we ultimately decided to do what Dom did here.
That isn't my recollection. We've clearly shown the need to discuss solutions on github. I'll try to be better about that in the future.
> I don't love this solution, as we are introducing another variable just because of a wonky unit. I think a more robust solution would be for the scheme to output fields in standard units (flashes min-1) and to introduce a new variable attribute in the framework, say scale_factor_out=0.2, that is applied in the cap. Let me see if I can make this happen today.
I like this. A scale factor is the ideal solution.
Doesn't the UFS at least allow you to scale output in the diagnostics code? If so, then you could also switch to flashes min-1 in ccpp, and have the GFS_diagnostics code scale this to 5 minutes (and set the units/attributes there)?
From GFS_diagnostics.F90:
```fortran
!---------------------------------------------------------------------------------------------!
! DIAGNOSTIC_METADATA !
! ExtDiag%id [integer ] switch to turn on/off variable output !
! ExtDiag%axes [integer ] dimensionality of variable (2 or 3) !
! ExtDiag%time_avg [logical ] bucketed accumulation time average !
! ExtDiag%time_avg_kind [char*64 ] time average period !
! ExtDiag%mod_name [char*64 ] classification of the variable !
! ExtDiag%name [char*64 ] output name for variable !
! ExtDiag%desc [char*128] long description of field !
! ExtDiag%unit [char*64 ] units associated with field !
! ExtDiag%mask [char*64 ] description of mask-type !
! ExtDiag%intpl_method [char*64 ] method to use for interpolation !
! ExtDiag%cnvfac [real*8 ] conversion factor to output specified units !
! ExtDiag%data(nb)%int2(:) [integer ] pointer to 2D data [=> null() for a 3D field] !
! ExtDiag%data(nb)%var2(:) [real*8 ] pointer to 2D data [=> null() for a 3D field] !
! ExtDiag%data(nb)%var21(:) [real*8 ] pointer to 2D data for ratios !
! ExtDiag%data(nb)%var3(:,:) [real*8 ] pointer to 3D data [=> null() for a 2D field] !
!---------------------------------------------------------------------------------------------!
```
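For illustration, here is a toy Python mock of one ExtDiag entry showing where `cnvfac` would come in. The real structure is a Fortran derived type in GFS_diagnostics.F90; the field name and values below are made up:

```python
from dataclasses import dataclass, field

@dataclass
class ExtDiagEntry:
    """Toy stand-in for one ExtDiag metadata entry from the table above."""
    name: str
    unit: str
    cnvfac: float = 1.0  # conversion factor to output-specified units
    data: list = field(default_factory=list)

    def output_values(self):
        """Data rescaled to the declared output units."""
        return [v * self.cnvfac for v in self.data]

# Hypothetical example: a lightning field stored internally in flashes
# per minute, converted to flashes per 5 minutes (factor 5) on output.
ltg = ExtDiagEntry(name="ltg_threat", unit="flashes 5 min-1",
                   cnvfac=5.0, data=[0.5, 1.5])
```

The open question in this thread is whether a `cnvfac`-style rescaling applied at this layer also reaches the inline post, which is addressed further down.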
Does the GFS_diagnostics scaling affect data sent to the inline post? Both the inline and offline posts have to produce the same output.
Oh yeah. I like the idea of scaling the diagnostics outside of the physics.
> Does the GFS_diagnostics scaling affect data sent to the inline post? Both the inline and offline posts have to produce the same output.
That I don't know - but we can find out.
If you know off the top of your head which regression test exercises the inline post for the data in question, please let me know. I can make the change and test it real quick.
It looks like some or all of the regional_* tests output that field.
I don't know if there is any lightning threat data in those files, though. With a low resolution or no storms, you could get 0 lightning threat across the entire domain. No matter how you scale 0, you end up with 0.
Those tests are useless to us. The entire lightning threat field is undefined:
```
1333:43881363:vt=2022082406:entire atmosphere:5-6 hour max fcst:LTNG Lightning [non-dim]:
ndata=61440:undef=61440:mean=0:min=0:max=0
grid_template=30:winds(N/S):
Lambert Conformal: (320 x 192) input WE:SN output WE:SN res 0
Lat1 35.299999 Lon1 234.750000 LoV 240.000000
LatD 38.000000 Latin1 38.000000 Latin2 38.000000
LatSP 0.000000 LonSP 0.000000
North Pole (320 x 192) Dx 3000.000000 m Dy 3000.000000 m mode 0
```
EDIT: This line tells us they are undefined. The number of undefined points (undef=61440) equals the number of points (ndata=61440):
`ndata=61440:undef=61440:mean=0:min=0:max=0`
Meh.
Well, then I will need to ask you to test the diffs between two commits on my branch for a useful case, sorry ... I should have the code ready later today (but no need to rush, this can wait until next week).
I asked @junwang-noaa about the inline post and she confirmed my expectations (thanks very much for that!)
> A question has come up with regard to GFS_diagnostics.F90 and the inline post. If a variable is scaled by a factor in GFS_diagnostics.F90, will that carry over to the inline post? In other words, does inline_post run on the extdiag DDT, or does it run on the internal GFS DDTs? (I assume the former, but figured you know better.)
> Yes, it's the extdiag DDT, which is used in the output file and works as the interface to post (inline or offline post).
The correct solution then is to:
I'm constructing a test case we can use for this. It'll be a real-time RRFS run which I'll copy to Hera.
@SamuelTrahanNOAA The code is ready.
The ufs-weather-model commit before the rescaling change is d8d13e21f564001df59db9e5c87e14289aa3737b and all the submodule pointers are set correctly.
The ufs-weather-model commit after the rescaling change is 49e4625fd9b1786790e1bd15cde775ca0bbbd77f and likewise all the submodule pointers are set correctly.
I ran the regional_* tests in rt.conf with both versions, and they both match the official baseline. I know that this isn't super helpful, since the fields in question are zero everywhere, but at least it means that the code is otherwise correct.
I have a test case here which works with the top of develop:
Now that I have Dom's hashes, I'll test those.
The lightning output is completely different. It isn't off by a factor of five, as one may expect. Instead, it's like I'm looking at a different product.
The lightning variables are accumulated quantities. I expect that's the reason for the discrepancy. I'll try a tweak and let you know if it works.
Thanks for your efforts @SamuelTrahanNOAA. Please let me know if I can help.
I have a new set of changes that work. The scaling has to happen deeper in the code for the units to be consistent. I'll do a PR to this branch soon.
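To illustrate why placement matters, here is a hedged Python sketch (hypothetical names, assuming the lightning threat is a running hourly maximum updated every timestep). For a pure maximum the two placements produce the same number, but only scaling inside the physics keeps the stored field in the declared units for every consumer that reads it, including the inline post:

```python
FIVE_MIN_TO_ONE_MIN = 0.2  # flashes per 5 minutes -> flashes per minute

def hourly_max_scaled_inside(rates_per_5min):
    """Convert each timestep's rate first, then take the running max.

    Every stored intermediate value is already in flashes min-1, so the
    field is in the declared units at all times.
    """
    return max(r * FIVE_MIN_TO_ONE_MIN for r in rates_per_5min)

def hourly_max_scaled_at_output(rates_per_5min):
    """Accumulate in flashes per 5 minutes, convert only on output.

    The stored field stays in the old units, which other code paths
    (e.g. the inline post) may read before the conversion happens.
    """
    return max(rates_per_5min) * FIVE_MIN_TO_ONE_MIN
```

Because a positive scale factor commutes with `max`, the final numbers agree; the difference is which units the intermediate, stored field carries.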
Also, I'd like to do a separate PR to develop with those changes, and add lightning threat to the conus13km tests.
EDIT: I just noticed your PR is entirely for these changes.
It turns out the conus13km tests have non-zero lightning threat data in them, at least on Hera. I'll run the regression tests to confirm they're the only ones with changed results.
My corrections are here:
Please update the PR description after merging my PR. The units coming out of the algorithm are now genuinely flashes per minute.
This sentence:
> In physics/Interstitials/UFS_SCM_NEPTUNE/maximum_hourly_diagnostics.meta: change units flashes 5 min-1 to flashes min-1 and update long name to make clear this is per 5 minutes.

Should be something like:
> In physics/Interstitials/UFS_SCM_NEPTUNE/maximum_hourly_diagnostics: output flashes per minute (flashes min-1) instead of flashes per 5 minutes (flashes 5 min-1). Model will convert this back to flashes per 5 minutes if needed for output.
And you can remove the quotes around "Fix" in the title now that we're properly fixing the problem.
I'm running regression tests on the combination of this branch and my PR to this branch.
With the combination of this branch and my PR to this branch:
Please merge my PR https://github.com/climbfuji/ccpp-physics/pull/16 to this branch so we can move forward with the commit process.
Description
In physics/Interstitials/UFS_SCM_NEPTUNE/maximum_hourly_diagnostics.{F90,meta}: change calculation and metadata units flashes 5 min-1 to flashes min-1 - credits to @SamuelTrahanNOAA for contributing the code changes.

See https://github.com/NCAR/ccpp-physics/issues/1047 for more information.
Associated PRs:
https://github.com/NCAR/ccpp-framework/pull/546
https://github.com/NOAA-EMC/fv3atm/pull/796
https://github.com/ufs-community/ufs-weather-model/pull/2181
Testing
See https://github.com/ufs-community/ufs-weather-model/pull/2181