Closed jonbob closed 6 years ago
Passes nightly regression suite with gnu/opt and intel/debug.
@mark-petersen, @maltrud and @jonbob, I searched the code for landIceMask and didn't see anywhere it is applied in a way that would affect salinity restoring. This isn't good news, since it means we're doing restoring (probably to zero) under ice shelves.
Giving it some thought, I think we want a config option config_salinity_restoring_under_land_ice that defaults to .false., and we want to disable salinity restoring wherever landIceMask == 1 if config_salinity_restoring_under_land_ice == .false.
@jonbob, let me know if you'd like me to make the change or if you can do it.
@xylar I can work on this too, but not until this afternoon. I looked at the code and agree that we appear to be restoring salinity below ice shelves. Sorry about your parameter study work so far. At least we found it now.
Please don't merge this PR until @vanroekel and I have had a chance to look it over carefully. I have added us as reviewers.
Definitely. I would appreciate a review and testing on what I just added.
I tried to make the logic more straightforward and reduce the if statements within the cell loops. I also added a check on the land ice mask right there in the salinity restoring. If we tried to rely only on the piston velocity to turn off restoring below ice shelves, the global correction would be a little off for the ice-shelf cases.
Note: We need to set this to true in ACME by default. This will not affect B cases, only G cases where salinity restoring is on:
config_salinity_restoring_under_sea_ice = .true.
@xylar I had put this flag in originally to do salinity restoring under all sea ice as you say (even iceFraction = 1). I think it has been taken over to serve the purposes here. I agree it is not the most straightforward name for what the flag does. I don't think for production runs we would ever not restore under partial sea-ice. In my opinion, if the design document says we will restore according to 1-iceFraction, then I would say we shouldn't have that flag at all.
@mark-petersen, the latest 2 commits seem like they do the correct thing. Thanks for updating.
One suggestion would be to run the code through the linter. Some lines look long and some indentation seems off (not sure the linter catches the latter).
I changed it to be: config_salinity_restoring_under_sea_ice = True
config_salinity_restoring_under_sea_ice = False (default in MPAS and ACME)
I think that was @vanroekel's original intention. I am running a test now, and will run the code through the linter.
Delinted. This is ready for final review, and I have a case running on cori now. I squashed all my work to a single commit.
@philipwjones That reduction in the openMP pragma:
!$omp do schedule(runtime) private(deltaS) reduction(+ : sumAreaDeltaS, sumArea)
Makes sense, but I then get the error
Error: reduction variable 'sumareadeltas' is private in outer context
Error: reduction variable 'sumarea' is private in outer context
I assume that's because each thread declares those variables as private at the top of the routine. Do I need to change the declaration line
real(kind=RKIND) :: sumAreaDeltaS, sumArea
somehow? We have other accumulating sums in the code, particularly in analysis members. I wonder whether they are all thread-unsafe.
@mark-petersen: Aargh, sorry about that. The compiler is correct and you can remove the reduction clause. I keep forgetting how the MPAS threading works... Though in this case, it actually helps with b4b.
So I can go back to this?
!$omp do schedule(runtime) private(deltaS)
I don't get it. If sumAreaDeltaS and sumArea are private, that means each thread starts the subroutine with its own copy, right? How do we get a correct sum across all cells - don't we need to sum those private variables across threads? I must be missing something.
Here's another thought. Before this PR, that iCell loop had no openMP pragma whatsoever. That means that all threads executed the whole iCell loop, so each thread had a private and correct copy of sumAreaDeltaS and sumArea, even though it was inefficient. The question is then whether to leave it, or to add an OMP pragma and still get the correct sum.
Yeah, let me take a closer look this morning - trying to do this too fast without thinking enough.
So, all this is much ado about nothing. This routine isn't called from a threaded region and there should be no pragmas. And if it were, it would give incorrect answers because the global sum only returns the actual sum for thread 0.
Testing now within E3SM.
The last commit ran in E3SM for two months, EC60to30wLI, here:
/global/cscratch1/sd/mpeterse/acme_scratch/cori-knl/a25f/run
We need a spatial plot of salinitySurfaceRestoringTendency and a check that the sum is zero. I don't have time right now, in case someone else wants to take it. If that checks out, I think we are ready to merge.
@mark-petersen results look good. Below are some figures from your test. The coloring is the salinity restoring tendency and the black lines are iceFraction contours (0.2, 0.4, 0.6, 0.8). The magenta line is the iceFraction = 0.99 contour. The area-weighted global mean for both months is ~1E-15. Things look good to me.
I agree that the results look right to me. @maltrud, I think we're awaiting your review before we merge.
Update salinity restoring to work as follows:
config_salinity_restoring_under_sea_ice = True
config_salinity_restoring_under_sea_ice = False (default in MPAS and ACME)
In addition, the code logic was simplified to be straightforward, and landIceMask is checked so salinity restoring never occurs below ice shelves.