E3SM-Ocean-Discussion / E3SM

Ocean discussion repository, for ocean issues and longer-term pull requests for E3SM source code. Please make pull requests that are ready to merge into https://github.com/E3SM-Project/E3SM
https://e3sm.org

Sensitivity of Redi to critical slope parameter #70

Open vanroekel opened 1 year ago

vanroekel commented 1 year ago

This is a new issue that builds on a comment by @dengwirda in #65: the MPAS-Ocean implementation of Redi is sensitive to the critical slope parameter in the parabolic bowl test case even when the actual slope is well below the maximum. For example, the test case has a slope of 0.005, yet the solution changes when the critical slope is varied from 0.1 to 0.5, which should have no effect. Here is the relevant comment from @dengwirda:

follow-up on the slope calc. aspects here, there currently appears to be a (seemingly unreasonable) dependence on the slope limit (config_Redi_maximum_slope) in the parabolic bowl test case --- this problem should set up simple linearly sloping isopycnals via an uncontroversial stratification, and the slope limit should not be necessary to maintain stability. It should therefore be possible to set a larger value of config_Redi_maximum_slope than the actual slope defined in the test case and obtain consistent output (e.g. if the defined isopycnal slope is 0.005, setting config_Redi_maximum_slope = 0.01, 0.05, 0.10, 0.20, etc. should give the same answers).

Currently though, the slope limit appears to immediately become active, and both it and the monotone limiter in https://github.com/E3SM-Ocean-Discussion/E3SM/pull/53 are needed to keep the simulation on the rails. To me this suggests revisiting the slope triad formulations re: instabilities.

There are two things that I've tried:

1. The safe_slope function included in this PR, which bounds the slopes onto [-1, +1] and prevents div-by-zero errors while removing the need for a small fp-tolerance, which in itself seems to be responsible for weird behaviour around 0.
2. Evaluating the slope triads at the top and bottom layer interfaces rather than the layer midplanes, as per: https://github.com/dengwirda/E3SM/blob/6c9aa01fdd11981f699b0d48c54665389cef9d66/components/mpas-ocean/src/shared/mpas_ocn_gm.F#L332. This averaging does appear to significantly improve the instability in the parabolic bowl case (allowing the case to run stably with a config_Redi_maximum_slope larger than the isopycnal slope itself). It's unclear whether this change is necessarily compatible with the slope triad definitions themselves, though.
Overall, this behaviour makes me unconfident re: the slope formulation in general, and suggests a further look.
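For reference, the bounding idea behind safe_slope can be sketched as follows. This is an illustrative Python sketch, not the PR's actual Fortran: the argument names, the sign convention s = -drho_dx / drho_dz, and the choice to clip to ±1 (rather than taper) are my assumptions based on the description above.

```python
def safe_slope(drho_dx, drho_dz):
    """Isoneutral slope s = -drho_dx / drho_dz, bounded onto [-1, +1]
    with no small fp-tolerance in the denominator (hypothetical sketch)."""
    if drho_dx == 0.0:
        return 0.0                      # flat isopycnal: s is exactly 0
    if abs(drho_dz) >= abs(drho_dx):
        return -drho_dx / drho_dz       # |s| <= 1, division is safe
    # |s| would exceed 1 (including the drho_dz == 0 case): clip to the
    # bound, keeping the sign of -drho_dx / drho_dz
    s_sign = -drho_dx * drho_dz
    if s_sign > 0.0:
        return 1.0
    if s_sign < 0.0:
        return -1.0
    return 1.0 if -drho_dx > 0.0 else -1.0   # vertical isopycnal (drho_dz == 0)
```

Dividing only when the denominator dominates, and clipping otherwise, is what removes the need for an epsilon: there is no code path where a tiny drho_dz is used as a divisor.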
vanroekel commented 1 year ago

@dengwirda I took a hard look through the slope calculation and the tracer_hmix_redi routine, including working things through with paper and pencil. Everything checked out.

I then ran a simulation with no slope taper and saw that salinity and temperature stay fixed through the 6 months of the run (with your quasi-monotone limiter switched on).

Next I also flipped on the slope limiter, and the results stayed identical. These tests included my new limiting approach, so perhaps we are okay now? I forget whether you ever tried your slope sensitivity tests with my limiter PR.

dengwirda commented 1 year ago

@vanroekel sounds good! As you say, the sensitivity to the slope limit that I saw was with the previous approach to tapering/limiting, so it's encouraging to know those changes of yours have improved things on this front.

vanroekel commented 1 year ago

Well shoot, my initial assessment was wrong. There is still sensitivity, @dengwirda -- my plots from earlier today were incorrect. I reran with the analysis fixes and there are definite differences, and they emerge at the bottom. What I see is that the bottom slope starts to grow (I don't understand why) and this causes the sensitivity. I'm looking into this a bit further now.

vanroekel commented 1 year ago

It seems this may be a stability issue. I reran the tests with a timestep of 5 minutes instead of 30 minutes and the sensitivity to slope disappeared. I'm using rediKappa = 3600, so perhaps that is not surprising, but dc is 40,000 m, so I didn't expect a stability issue.
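A back-of-envelope check (my own sketch; the layer thickness dz = 10 m is an assumed illustrative value, not taken from the test case) suggests the binding limit may be the vertical K33 = kappa * slope^2 part of the Redi tensor rather than the horizontal term:

```python
kappa = 3600.0    # m^2/s, rediKappa from the run
dc    = 40000.0   # m, horizontal cell spacing
slope = 0.005     # isopycnal slope prescribed in the bowl
dz    = 10.0      # m, ASSUMED layer thickness (illustrative only)

# explicit diffusive limits dt < L^2 / (2 K) in each direction
dt_horiz = dc**2 / (2.0 * kappa)   # ~2.2e5 s: never binding
k33      = kappa * slope**2        # 0.09 m^2/s, vertical Redi diffusivity
dt_vert  = dz**2 / (2.0 * k33)     # ~556 s, i.e. about 9 minutes

print(f"dt_horiz = {dt_horiz:.0f} s, dt_vert = {dt_vert:.0f} s")
```

Under these (assumed) numbers a 5-minute step sits below the vertical limit while a 30-minute step sits well above it, which would be consistent with the observed sensitivity.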

vanroekel commented 1 year ago

For comparison, here is the parabolic bowl salinity after 2 months for dt = 30 min:

[image: salinity, dt = 30 min]

and here is the same configuration except dt = 5 min (again at 2 months):

[image: salinity, dt = 5 min]
dengwirda commented 1 year ago

Okay, interesting. I suppose it's possible kappa = 3600 is exceeding a CFL-type limit, though there would only be a factor of 2 involved beyond the kappa = 1800 at 30 km that's been used in real runs, so I hope this isn't the case...

Is this using a linear or nonlinear EoS? There will be a small feedback in the nonlinear case which may cause a drift away from the linear isopycnals I believe.

There do appear to be some small kinks in the contours near the bathymetry with dt = 5 min too, so I wonder if there's still an issue to pursue re: the masking?

vanroekel commented 1 year ago

Yes, I definitely agree there is still something funny around the bathymetry. I've had a hard look at the masking and it seems to be doing as expected. I've stepped through slopeTriadUp and slopeTriadDown one timestep at a time, with and without limiting, and it seems to be right, but one thing that surprised me (though maybe it shouldn't have?) is that the masking of bathymetry is always at the same index (index 1).

One place I'm not yet certain about is the masking's interplay with k33. I haven't convinced myself that it is doing the right thing near the bottom. I'll have a look at that next.
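To illustrate the kind of check involved (an illustrative Python sketch, not the MPAS-Ocean Fortran; the function and argument names are hypothetical): in the small-slope Redi tensor the vertical-vertical entry is K33 = kappa * S^2, assembled from the triad slopes, and any triad touching a masked (bathymetry) cell should contribute zero rather than a spurious slope.

```python
def k33_at_interface(kappa, triad_slopes, triad_masks):
    """K33 = kappa * mean(S^2) over active triads at one layer interface.
    triad_masks holds 1 for a valid triad, 0 for one touching bathymetry;
    if every triad is masked the coefficient must drop to zero."""
    active = [s * s for s, m in zip(triad_slopes, triad_masks) if m]
    if not active:
        return 0.0
    return kappa * sum(active) / len(active)
```

For example, with kappa = 3600 and all triads at the prescribed slope 0.005 this gives K33 = 0.09 m^2/s, and at a fully masked interface it falls to exactly zero; a spurious large slope in a masked triad must not leak into the average.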

vanroekel commented 1 year ago

@dengwirda and @mark-petersen I've been poking at this bottom sensitivity issue and think I've made some progress. Here is a run with a couple of changes (notably, vmix moved inside the supercycling loop): 3-month simulation, 30-minute timestep.

[image: salinity, vmix inside supercycling, dt = 30 min]

Here is the original as a reminder (same length, same timestep):

[image: salinity, original configuration, dt = 30 min]

This is certainly significantly better, but it requires a lot more calls to vmix. I was thinking a possible next step is to try this test with Hyun's AB2 option, as it gets rid of the supercycle loop.

vanroekel commented 1 year ago

Also, just to note: it's still not perfect; there is some noise along the right bottom boundary, but this seems like a big step.

vanroekel commented 1 year ago

Sorry, one further note -- the runs in the comment just above, where T&S look much better, do not include the quasi-monotone limiter or slope limiting, so it is running with no limiting at all. Without vmix in the supercycle loop, the run crashes pretty quickly.

dengwirda commented 1 year ago

Interesting @vanroekel! It makes sense to me that lagging updates to the K33 part of the tensor may introduce some asymmetry, and it looks like these results confirm that instability can come from that. Hopefully this explains the dt=5min vs dt=30min differences above, which had definitely seemed strange.

I believe that even the RK4 implementation hoists vmix outside of the substage updates, so I suspect this time-splitting error will be common across all of the MPAS configurations, except the AB2 solver as you say.
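As a toy illustration of why hoisting a stiff vertical-mixing update out of the subcycle can destabilise things (a scalar sketch of my own with explicit Euler substeps; it is not the MPAS discretization, where vmix is treated implicitly): for du/dt = -(a + b) u, applying the "vmix" part b only once per outer step leaves it subject to the full outer-step stability limit, while folding it into the substeps does not.

```python
def step_factor(a_dt, b_dt, n_sub, vmix_inside):
    """Amplification factor of one outer step of du/dt = -(a + b) u using
    explicit Euler. a_dt, b_dt are a*dt and b*dt over the full outer step.
    vmix_inside=True applies b within each of the n_sub substeps;
    False applies it once, 'hoisted' outside the subcycle."""
    if vmix_inside:
        return (1.0 - (a_dt + b_dt) / n_sub) ** n_sub
    return (1.0 - b_dt) * (1.0 - a_dt / n_sub) ** n_sub

# stiff 'vmix': b_dt = 3.5 exceeds the explicit limit of 2 for one full step
inside  = step_factor(0.5, 3.5, 6, True)    # |factor| < 1: stable
hoisted = step_factor(0.5, 3.5, 6, False)   # |factor| > 1: unstable
print(inside, hoisted)
```

In this toy, halving the outer step (halving a_dt and b_dt) also restores stability of the hoisted form, loosely mirroring the dt = 5 min vs dt = 30 min behaviour above.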

If I remember correctly, we didn't see a big change in global runs moving vmix inside the split-ex cycle, but perhaps with so many other changes it's worth revisiting that?