ESCOMP / CAM

Community Atmosphere Model

CLUBB/CAM interface bug fixes #336

Closed adamrher closed 2 years ago

adamrher commented 3 years ago

A series of bugs have been identified in clubb_intr.F90. They fall into three main categories:

1. Incorrect definitions of variables throughout clubb_intr. The correct equations are as follows:

2. Namelist variables are not used for the convection area fraction calculation; the values are hard-coded instead.

3. Outfld variables that are incorrectly assigned to the interface grid (i.e., ilev). These variables are on the lev (midpoint) grid.

Experiments performed last year using the new definitions of thv_ds_zt and rho_ds_zt do not impact the climate, but I still need to do a run with all of the above changes to assess any impacts.

@PeterHjortLauritzen @JulioTBacmeister @mikaelwitte @tto061

tto061 commented 3 years ago

I suggest the following:

1) Density: keep the layer-mass based definition:

     rho_ds_zt(k+1) = (1._r8/gravit)*(state1%pdel(i,pver-k+1)/dz_g(pver-k+1))*(1._r8-state1%q(i,pver-k+1,ixq))

2) thv is not needed at all before the call to CLUBB, so it does not need to be calculated.

3) Vertical velocity: the boundary condition dz/dt = 0 should be applied at CLUBB's surface momentum level, zm(1). Hence I suggest inserting, after the line wm_zm = zt2zm_api(wm_zt), the do block

     do k=nlev,1,-1
        wm_zt(k) = wm_zt(k) - wm_zm(1)
        wm_zm(k) = wm_zm(k) - wm_zm(1)
     enddo

In my tests the impact of any of these changes on the simulations is minimal.

Thomas
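As a minimal Python sketch of the proposed do block (illustrative only; the level ordering, toy profile, and the two-point stand-in for zt2zm_api are assumptions, not CAM code), the idea is to shift both profiles by the surface momentum-level value so dz/dt = 0 holds exactly at zm(1):

```python
import numpy as np

def zt2zm(wm_zt):
    # toy stand-in for zt2zm_api: two-point average of adjacent thermo levels
    return 0.5 * (wm_zt[:-1] + wm_zt[1:])

def apply_surface_bc(wm_zt, wm_zm):
    # subtract the surface momentum-level value from both profiles,
    # enforcing dz/dt = 0 exactly at zm(1) while preserving vertical shear
    return wm_zt - wm_zm[0], wm_zm - wm_zm[0]

# toy bottom-up profile; index 0 is the subsurface ghost point
wm_zt = np.array([0.02, 0.02, 0.05, 0.08, 0.10])
wm_zm = zt2zm(wm_zt)
wm_zt_fix, wm_zm_fix = apply_surface_bc(wm_zt, wm_zm)
assert wm_zm_fix[0] == 0.0
```

The shift leaves all vertical differences of wm unchanged, so only the absolute boundary value is altered.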

adamrher commented 3 years ago

Hi Thomas,

Re: 2. I agree that thv is no longer needed before the CLUBB call and so could be eliminated there. But we do use this definition in CLUBB+MF, which corresponds to tag cam6_3_006 or later, so I may move it down closer to where the MFs are called ... And I do think that condensate loading is necessary for our MF implementation. The CLUBB+MF implementation in that tag is a preliminary nuts-and-bolts version, so don't be too rough on me :). I have a branch that has all the bells and whistles and bug fixes.

Re: 3. Just to clear up the role of the subsurface ghost point on the mid-point grid: Vince indicates this point isn't really used for ... well ... much of anything. In the state vector x on the LHS of Ax = b, psi_zt^(t+dt)(1) = psi_zt^t(1), and this subsequently gets overwritten by psi_zt^(t+dt)(1) = psi_zt^(t+dt)(2) after solving the matrix. So when initializing the arrays to send to CLUBB, I've been setting psi_zt^t(1) = psi_zt^t(2), and you can see this is done throughout clubb_intr. So if you are assuming wm_zt(2) = 0, then wm_zt(1) = 0 by convention. And when you then interpolate wm_zm = zt2zm_api(wm_zt), the interpolation is simply wm_zm(1) = 0.5*wm_zt(1) + 0.5*wm_zt(2), which ensures that wm_zm(1) = 0.
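The interpolation argument can be checked in a one-liner (the two-point average here is an illustrative stand-in for what zt2zm_api does at level 1):

```python
def zm_level1(wm_zt1, wm_zt2):
    # illustrative two-point average standing in for zt2zm_api at level 1
    return 0.5 * wm_zt1 + 0.5 * wm_zt2

wm_zt2 = 0.0       # assume zero at the first midpoint above the surface
wm_zt1 = wm_zt2    # subsurface ghost point copies the level above
assert zm_level1(wm_zt1, wm_zt2) == 0.0
```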

All that to say, I think we only need to replace the code block for wm_zt to be:

  do k=1,nlev
     wm_zt(k+1) = -1._r8*(state1%omega(i,pver-k+1)-state1%omega(i,pver))/(rho_in(k+1)*gravit)
  enddo
  wm_zt(1) = wm_zt(2)

I agree that it is rather strict to assume dz/dt = 0 at the first midpoint level above the surface, but your experiments indicate this doesn't really have an impact ... And I do agree that it makes sense, since it addresses the lack of any constraint on dz/dt at the surface. The side effect is that we are forced to assume the mean advection terms of the second-order moment equations (and the w'^3 equation) are always negligible at the lowest model level. I don't have any insight into how reasonable an assumption this is.

tto061 commented 3 years ago

Hi Adam, thank you. What you say makes sense to me. We certainly should keep thv as it's needed for MF. I share your doubt on the last point. My experiments are just short 5-day tests at f19, so maybe better not to put too much weight on them; I imagine that at ne120 wm may become more important. So yes, I realised that wm_zt(2)=0 => wm_zm(1)=0, but perhaps not the other way around, and by removing wm_zm(1) instead perhaps we can retain some wm_zt(2) effects (whatever they turn out to be) in the moments equations.

adamrher commented 3 years ago

by removing wm_zm(1) instead perhaps we can retain some wm_zt(2) effects (whatever they turn out to be) in the moments equations.

I think this is probably the safer approach. So are you saying we should retain the original calculation of wm_zt, but then zero out wm_zm(1) after wm_zm = zt2zm( wm_zt )?

tto061 commented 3 years ago

yes.

But actually the disadvantage there is that there will be a dependence of wm_zm(1) on the clubb interpolation and therefore also on the arbitrary ghost value wm_zt(1): e.g. if you set wm_zt(1)=wm_zt(0), the result would be the same as in our suggestion since all three values wm_zt(1:2) and wm_zm(1) would be identical.

With wm_zt(1)=0., we would subtract half of the pressure tendency at tracer level 1. This is actually worse since that tendency would be close to the true dPS/dt.

So I should modify my proposal by adding also the change wm_zt(1)=0. -> wm_zt(1)=wm_zt(2) with the same result that you get in your more straightforward modification of wm_zm.

The advantage might be that perhaps in future omega might be defined where one needs it, i.e., on interface levels. It is perverse that dp/dt is now explicitly vertically averaged to produce a notional omega on full levels.

Once omega were fixed, wm_zm would be available as input to CLUBB and removing wm_zm(1)=omega(plev) would remain the right thing to do.

Actually we could still back out dPS/dt now by taking 2.*[omega(plev) - omega(plev-1) + omega(plev-2) - ...], since dp/dt=0 at the top, and subtract that from wm instead of wm(1) or omega(1), but that would perhaps look a bit bizarre?
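The alternating-sum identity can be sanity-checked numerically. A sketch, assuming full-level omega is the two-point vertical average of the bounding interface values with omega = 0 at the model top (the indexing and random profile are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
plev = 10
# interface omega: index 0 = model top (dp/dt = 0 there), index plev = surface
om_int = np.concatenate(([0.0], rng.normal(size=plev)))
# full-level omega as the vertical average of the bounding interfaces
om_mid = 0.5 * (om_int[:-1] + om_int[1:])
# alternating sum 2*[om(plev) - om(plev-1) + om(plev-2) - ...] telescopes,
# leaving only the surface interface value (the top term vanishes)
signs = np.array([(-1) ** k for k in range(plev)])
dpsdt = 2.0 * np.sum(signs * om_mid[::-1])
assert np.isclose(dpsdt, om_int[-1])  # recovers surface dp/dt, i.e. dPS/dt
```

The telescoping works for either parity of plev precisely because the top interface term is zero.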

More practically, perhaps your solution is better.

In principle we should change that omega.



adamrher commented 3 years ago

OK. I think I follow. So the real problem is we don't have dPS/dt, which means our calculation of wm won't be exact. I think we have dPS/dt for the SE dycore, but I guess not the FV dycore. I'm a little wary of trying to reconstruct this from omega. Can we be sure that deriving dPS/dt in the physics will be consistent with all hybrid-pressure based dycores in CESM? I suppose when MPAS comes in (z-coordinate) we can put in an if statement so we don't need to bother with this conversion. Either way, the dPS/dt issue seems beyond the scope of this specific git issue.

I will note that I know of no other physics routines that require resolved vertical velocities (do prior versions of CAM compute TKE using wm?) ... so (for now) it seems to me this is a CLUBB-specific problem.

adamrher commented 3 years ago

@mkurow wonders if we should use a density temperature for thv that includes condensate loading from all species (cldliq, cldice, rain, snow, graupel). That is, thv = (T/exner)*(1. + (Rv/Rd - 1.)*qv - ql - qi - qr - qs - qg).

Whatever we go with, it should be consistent with how density temperature is computed elsewhere in the code.
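A minimal stand-alone Python sketch of this density temperature (the gas-constant values and function signature are illustrative; CAM's actual constants live in physconst):

```python
Rd = 287.04   # J/kg/K, dry-air gas constant (typical value, assumed here)
Rv = 461.5    # J/kg/K, water-vapor gas constant (typical value, assumed here)

def thv_loaded(T, exner, qv, ql=0.0, qi=0.0, qr=0.0, qs=0.0, qg=0.0):
    # thv = (T/exner) * (1 + (Rv/Rd - 1)*qv - ql - qi - qr - qs - qg):
    # vapor raises buoyancy, all condensate species load (reduce) it
    return (T / exner) * (1.0 + (Rv / Rd - 1.0) * qv
                          - (ql + qi + qr + qs + qg))

# vapor alone raises thv above T; adding condensate lowers it again
base = thv_loaded(300.0, 1.0, qv=0.01)
loaded = thv_loaded(300.0, 1.0, qv=0.01, ql=0.001, qi=0.0005)
assert base > 300.0 and loaded < base
```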

adamrher commented 3 years ago

Hi @tto061, I'm putting together a branch with these mods. After reading through our discussion, I think we decided that the least-worst solution was to keep the calculation as is, but zero out wm_zm(1). That is,

  !  Compute mean w wind on thermo grid, convert from omega to w 
  wm_zt(1) = 0._r8
  do k=1,nlev
     wm_zt(k+1) = -1._r8*state1%omega(i,pver-k+1)/(rho_in(k+1)*gravit)
  enddo

  ...

      wm_zm = zt2zm_api(wm_zt)
      wm_zm(1) = 0._r8

This allows us to retain a non-zero value for wm_zt(2) while enforcing dz/dt = 0 at wm_zm(1). I think the way it zeros out the subsurface wm_zt(1) seems more sensible than the usual convention of setting it to wm_zt(2).

I'm running a 5-year F2000climo and will link the diagnostics here if you want to see 'em.

adamrher commented 3 years ago

I am seeking help from the SEs. I am having problems importing a CAM5 macrophysics namelist variable into clubb_intr. At the top of clubb_intr, I add:

use cloud_fraction, only: cldfrc_dp1, cldfrc_dp2

after having made these variables public in cloud_fraction.F90. But when I use these namelist variables in clubb_intr in the expression:

deepcu(i,k) = max(0.0_r8,min(cldfrc_dp1*log(1.0_r8+cldfrc_dp2*(cmfmc(i,k+1)-cmfmc_sh(i,k+1))),0.6_r8))

it is doing really weird things. For a 20-day GATE SCAM case, it makes the time dimension of the i/o on the order of a million. I have a feeling it's also doing weird things in an F2000climo run, since the ITCZ looks really, really cloudy, which is where this variable deepcu (convective area fraction for radiation) is very active. [edit: when I print the values of cldfrc_dp1 and cldfrc_dp2 to the log, they are as expected from namelist_defaults.xml]

My attempt to define a real(r8) :: dp1 = cldfrc_dp1 gives me a build error saying this can't be set (I can dig up the actual error if you think it is relevant). Any tips would be appreciated.
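For reference, a minimal Python sketch of what the deepcu expression computes (the dp1/dp2 values below are placeholders for illustration, not the namelist defaults):

```python
import math

def deepcu_frac(cmfmc, cmfmc_sh, dp1, dp2):
    # convective area fraction for radiation: a log of the net deep
    # (total minus shallow) mass flux, scaled by dp1, clipped to [0, 0.6]
    arg = 1.0 + dp2 * (cmfmc - cmfmc_sh)
    return max(0.0, min(dp1 * math.log(arg), 0.6))

dp1, dp2 = 0.1, 500.0  # placeholder tuning values, NOT the CAM defaults
assert deepcu_frac(0.0, 0.0, dp1, dp2) == 0.0        # no net deep mass flux
assert 0.0 < deepcu_frac(0.01, 0.0, dp1, dp2) <= 0.6
```

If dp1 or dp2 arrive uninitialized (e.g., reading the namelist variable on a task that never received the broadcast), the clip bounds no longer constrain the result sensibly, which is consistent with the weird output described above.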

cacraigucar commented 3 years ago

@adamrher You need to use the variables that have been mpibcast to every MPI task. That means you need to use dp1 and dp2: these are the variables to which your namelist values are assigned and then broadcast to all tasks. Only the base task (masterproc) knows about the actual namelist variables.

adamrher commented 3 years ago

@cacraigucar that makes sense. Thanks. Weirdly, this did not get rid of the issue where my time dimension is like a million for my GATE SCAM case. It is clearly doing that b/c of this line:

deepcu(i,k) = max(0.0_r8,min(dp1*log(1.0_r8+dp2*(cmfmc(i,k+1)-cmfmc_sh(i,k+1))),0.6_r8))

I'll continue to debug.

cacraigucar commented 3 years ago

What was the solution to this problem? Since a PR has been opened, I assume this bug has been fixed.

adamrher commented 3 years ago

Sleep? I'm not quite sure, but I was able to run F2000climo and FSCAM without any of the aforementioned issues using dp1 and dp2 in clubb_intr.