Suggestion from Jack is that we might need to clone CESM/CAM to integrate this. The danger is that we would then need to maintain a fork, unless we go down the route of asking the atmospheric working group to integrate our code (which we believe would be the plan if the performance were good).
It is not clear at this stage whether we need to change anything in CESM to integrate our model. Paul and Judith can probably help us understand this, and we will find out more as we go.
If we do not need to change CESM code, then we should be able to work locally with a clone of CESM (at a particular version), with this git repo cloned at the appropriate place within SCAM/CAM.
With respect to build systems, we have discussed using CMake during development for testing purposes (#32). For any release version of the code we should provide a standalone makefile, to avoid CMake/pFUnit dependencies in the CESM build.
We want to be able to call both the old deep convection scheme (Zhang-McFarlane) and the new scheme at the same time. This work could also replace shallow convection (as well as deep convection); therefore the place we are going to call our code from is the grid physics module.
Comments from Paul taken from #38 regarding testing variable conversion between CAM and SAM:
Here is some brainstorming of additional tests that might be useful:
- Make sure that the transformation (qv, qcc, qii, tabs) <-> (q, t) returns the same values when we use it in the forward direction followed by the reverse direction (a minimal harness for this is sketched after this list)
- Implement the new parameterization with CAM-type inputs (qv, qcc, qii, tabs) in SAM and see that it gives similar results to the original implementation
- For a few sample temperatures and values of q (some high q so there is cloud and some low q so there is no cloud), calculate the transformation (q, t) -> (qv, qcc, qii, tabs) and compare the results with an independent calculation using the equations in the appendix of the original SAM paper (https://doi.org/10.1175/1520-0469(2003)060<0607:CRMOTA>2.0.CO;2)
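A minimal sketch of the first (round-trip) check as a standalone Fortran harness. The routine names `cam_to_sam`/`sam_to_cam` and the trivial conversions inside them are invented placeholders for illustration; the real conversion code (including the SAM saturation adjustment) would need to replace them before the test is meaningful:

```fortran
module conversion_placeholders
  implicit none
  integer, parameter :: dp = kind(1.0d0)
contains

  subroutine cam_to_sam(qv, qcc, qii, tabs, q, t)
    ! Placeholder forward transform (CAM -> SAM variables).
    ! The real version also folds the latent-heating terms into t.
    real(dp), intent(in)  :: qv, qcc, qii, tabs
    real(dp), intent(out) :: q, t
    q = qv + qcc + qii   ! total non-precipitating water
    t = tabs             ! stand-in for the SAM temperature-like variable
  end subroutine cam_to_sam

  subroutine sam_to_cam(q, t, qv, qcc, qii, tabs)
    ! Placeholder reverse transform (SAM -> CAM variables).
    ! The real version performs the saturation adjustment to split q
    ! into vapour, cloud liquid and cloud ice.
    real(dp), intent(in)  :: q, t
    real(dp), intent(out) :: qv, qcc, qii, tabs
    qv   = q
    qcc  = 0.0_dp
    qii  = 0.0_dp
    tabs = t
  end subroutine sam_to_cam

end module conversion_placeholders

program test_roundtrip
  use conversion_placeholders
  implicit none
  real(dp) :: qv0, qcc0, qii0, tabs0, q, t, qv1, qcc1, qii1, tabs1
  real(dp), parameter :: tol = 1.0e-10_dp

  ! One subsaturated (cloud-free) test column; saturated cases should be
  ! added once the real conversion routines are plugged in.
  qv0 = 5.0e-3_dp;  qcc0 = 0.0_dp;  qii0 = 0.0_dp;  tabs0 = 280.0_dp

  call cam_to_sam(qv0, qcc0, qii0, tabs0, q, t)
  call sam_to_cam(q, t, qv1, qcc1, qii1, tabs1)

  if (abs(qv1 - qv0) > tol .or. abs(qcc1 - qcc0) > tol .or. &
      abs(qii1 - qii0) > tol .or. abs(tabs1 - tabs0) > tol) then
    print *, 'round-trip check FAILED'
  else
    print *, 'round-trip check OK'
  end if
end program test_roundtrip
```

The same tolerance-based pattern could later be wrapped as a pFUnit test in the CMake development build mentioned above.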
Experiencing issues building on CSD3.
Successful build on Cheyenne now that we have obtained a project number (thanks to Marika at NCAR). (We need to append `--project <ProjectNumber>` to the `./create_newcase` command; see here.)
Have successfully built and run gateIII benchmark.
Attempted to modify the source in `components/cam/src/physics/cam/convect_deep.F90` to call a dummy routine `YOG` instead of `ZM`, and to write to the model run logs that appear in `/glade/scratch/$USER/archive/<TestCase>/logs/cesm`.
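For reference, the kind of stub we were trying to hook in looks roughly like the following. This is only a hypothetical sketch (the module and routine names are made up); inside CAM the write would go to the model log unit rather than standard output:

```fortran
! Hypothetical dummy "YOG" routine used only to confirm that our hook in
! convect_deep.F90 is reached: it writes a marker line that should then show
! up in the run logs under /glade/scratch/$USER/archive/<TestCase>/logs/cesm.
module yog_dummy_mod
  use iso_fortran_env, only: output_unit
  implicit none
contains
  subroutine yog_dummy(nstep)
    integer, intent(in) :: nstep
    ! Marker line to grep for in the log files.
    write(output_unit, '(a,i0)') 'YOG dummy convection routine called, nstep = ', nstep
  end subroutine yog_dummy
end module yog_dummy_mod
```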
However, even when we try to modify the namelists, they are reset during the build process. I think we need to look for a NetCDF file from which these parameters are read during the setup/build process.
Useful information here: users_guide - atmospheric-configurations
Further useful details from @judithberner
[re the ML deep convective parameterization]: the input is a state and the output is a deep convective tendency. In this case we will call your routines from physpkg.f90 and replace X_tend and X_tend2. We would keep the X_register and X_init routines, since they add the necessary fields to the pbuf buffer (X_register) and initialize them to zero at the beginning of the timestep (X_init), so we would only need to replace X_tend and X_tend2.
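To make the shape of that interface concrete, here is a schematic (not CAM code; all names are placeholders for a hypothetical `yog` scheme) of a register/init/tend triplet, with comments marking where the pbuf calls and the neural-network inference would go:

```fortran
module yog_scheme
  implicit none
  integer, parameter :: r8 = selected_real_kind(12)
contains

  subroutine yog_register()
    ! In CAM this would add the scheme's output fields to the physics
    ! buffer (pbuf), mirroring what X_register does for the existing scheme.
    print *, 'yog_register: add fields to pbuf'
  end subroutine yog_register

  subroutine yog_init()
    ! In CAM this would zero the pbuf fields at the start of the timestep
    ! and load the trained network parameters.
    print *, 'yog_init: zero buffers / load network weights'
  end subroutine yog_init

  subroutine yog_tend(t_in, q_in, dtime, t_tend, q_tend)
    ! Stand-in for the tendency call made from physpkg.f90: takes (part of)
    ! the model state and returns deep-convective tendencies.
    real(r8), intent(in)  :: t_in(:), q_in(:)    ! temperature and humidity profiles
    real(r8), intent(in)  :: dtime               ! physics timestep
    real(r8), intent(out) :: t_tend(:), q_tend(:)
    ! Placeholder: the neural-network inference goes here.
    t_tend = 0.0_r8
    q_tend = 0.0_r8
  end subroutine yog_tend

end module yog_scheme
```

With a triplet like this, the Zhang-McFarlane and YOG tendencies can be computed side by side, which is what we want for comparing the two schemes.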
The project is sufficiently underway that it makes sense to close this. For reference, after some confusion around using source mods and editing the ZM scheme, we have settled on using a fork of CAM. This lives in CAM-ML on the M2Lines GitHub and can be pulled into a CESM build using the externals. We have also implemented YOG as its own scheme, separate from deep convection, as recommended by @paogorman.
We should plan and understand the integration of the neural-network-based convection parameterisation with CESM and SCAM/CAM.