har917 opened this issue 3 months ago (status: Open)
Sounds too hard (and perhaps unnecessary) to have a steadfast plan. Although this might be ideal, I strongly believe that any plan we adopt will quickly unravel into a case-by-case scenario. Perhaps that will change as we gather experience over time. For the moment we are going to have to involve/rely on the developer a lot, e.g. for GW and POP. There are more self-contained models like Soil Colour Albedo, but these are probably more the exception than the rule.
From my point of view, requiring developers to ensure their developments work in conjunction with other options means we need to provide developers with the tests for that. So we can plan to extend testing coverage towards ensuring new developments work with the various applications of CABLE; a rough sketch of what such combination testing could look like is below.
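As an illustration only, here is a minimal sketch (in Python) of a driver that runs the offline executable over a small matrix of science-option combinations and records which ones run to completion. The option names (`soil_model`, `carbon_cycle`), the namelist layout and the `./cable` launch command are placeholders I've made up for the sketch, not CABLE's actual interface; a real version would hook into the existing testing framework rather than reinvent it.

```python
"""Sketch: run the offline executable over a small matrix of science-option
combinations and record which combinations complete.

All option names, namelist keys and paths are placeholders that illustrate
the idea, not CABLE's actual namelist interface.
"""
import itertools
import subprocess
from pathlib import Path

# Hypothetical science options we want to exercise in combination.
SOIL_MODELS = ["soil_snow", "sli", "gw"]   # placeholder values
CARBON_OPTIONS = ["casa_off", "casa_on"]   # placeholder values

NAMELIST_TEMPLATE = """\
&cable_user
  soil_model   = '{soil}'
  carbon_cycle = '{carbon}'
/
"""

def run_combination(soil: str, carbon: str, workdir: Path) -> bool:
    """Write a namelist for one combination and check the run completes."""
    workdir.mkdir(parents=True, exist_ok=True)
    (workdir / "cable.nml").write_text(
        NAMELIST_TEMPLATE.format(soil=soil, carbon=carbon)
    )
    # './cable' stands in for however the offline model is actually launched.
    try:
        result = subprocess.run(["./cable"], cwd=workdir, capture_output=True)
    except FileNotFoundError:
        return False
    return result.returncode == 0

def main() -> None:
    for soil, carbon in itertools.product(SOIL_MODELS, CARBON_OPTIONS):
        ok = run_combination(soil, carbon, Path("runs") / f"{soil}_{carbon}")
        print(f"{soil:>10} + {carbon:<9} : {'ran' if ok else 'FAILED'}")

if __name__ == "__main__":
    main()
```

Even a crude matrix like this would at least make explicit which combinations are exercised and which are not, which connects to the tracking problem raised further down the thread.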
But I don't think we can ever require CABLE developers to (1) be experts in all the science/technical aspects of CABLE, and (2) know how to run CABLE in all its applications (beyond the testing framework). This means we can't ever require that new developments work with a given set of applications of CABLE. But we can test whether they seem to work, and we can use the documentation to note specific problems.
We can also look at this from a different perspective and use standard, maintained configurations here, at least for the standalone part of the problem. We could say that any combination of science options that differs from the standard configurations needs evaluation by the user before any science work is based on it; typically there is no guarantee it will work. This could possibly be extended to ACCESS configurations, but it would take longer to get there.
I'm conscious that testing will pick up whether things run/break - not whether the output still makes sense.
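One partial mitigation is an automated comparison of key outputs against a trusted reference run. Below is a minimal sketch of that idea; the variable names (`Qle`, `Qh`, `SoilMoist`), file paths and the 5% tolerance are assumptions for illustration, and the thresholds would need to be set per field by someone who knows the science.

```python
"""Sketch: a crude 'does the output still make sense' check, comparing a
new run against a trusted reference run field by field.

Variable names, paths and tolerances are illustrative only.
"""
import numpy as np
import xarray as xr

FIELDS = ["Qle", "Qh", "SoilMoist"]   # assumed output variable names
RTOL = 0.05                           # 5% relative tolerance, arbitrary

def compare(reference_file: str, new_file: str) -> bool:
    """Return True if all checked fields agree with the reference run."""
    ref = xr.open_dataset(reference_file)
    new = xr.open_dataset(new_file)
    ok = True
    for name in FIELDS:
        r, n = ref[name].values, new[name].values
        if not np.allclose(r, n, rtol=RTOL, equal_nan=True):
            diff = np.nanmax(np.abs(n - r))
            print(f"{name}: max abs difference {diff:.4g} exceeds tolerance")
            ok = False
    return ok

if __name__ == "__main__":
    passed = compare("reference/cable_out.nc", "new_run/cable_out.nc")
    print("output comparison", "passed" if passed else "FAILED")
```

Of course this only works where a trusted reference exists; for a genuinely new combination of options there is no reference, which is exactly the gap being pointed out here.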
Perhaps we need to implement a JULES/UKMO-lite process whereby science leaders who have knowledge of the intersecting parts of the code are required to review and comment on developments targeting MAIN? That would be in addition to the formal testing.
We are going to implement a science review step. This was discussed at a Land Working Group meeting; we just have to find the time to recruit volunteers, implement it, etc.
The title is a bit obscure, but I thought it worthwhile to initiate a discussion on how the community expects to handle a key aspect of community model development: intersecting areas of capability.
The CABLE model has evolved over time into a system of submodules largely oriented around canopy energy balance, soil energy and water balance, and the carbon cycle, with key points of intersection between them. We carry a set of overlapping parameterisations in some areas, and in the soil submodule space we now have three major options: soil_snow, SLI and the GW model (which somewhat wraps around soil_snow).
My question/concern is - what level of support can we as a community provide developers who are working in areas that link into/across these different parameterisations and/or submodules?
To provide a specific example: the GW model is being extensively reviewed/revised. That work has focussed on setting up the GW model as a replacement for soil_snow and ensuring that the new parameterisation schemes function as required within a particular configuration. Should we expect that this development (or at least the new parameterisations and the aquifer/water table part) also functions with SLI?
What about any developments that impact the technical details of the coupled models ACCESS and CCAM?
Requiring that developments work with/into 'all' possible configurations and applications of CABLE will likely create too much of a burden (and delay), especially for substantive developments: individuals may not have the science expertise to understand how the other parts of the model function. It also requires that the testing environment be expanded.
Equivalently, not requiring this at the point of first development will create (i) additional work (likely for others) at a later time and (ii) a need to track which combinations of options can and cannot be used together; one way that tracking could be made explicit is sketched below.
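For example, the knowledge of which combinations are (un)supported could live in an explicit table that CI or the model start-up checks, rather than in folklore or scattered documentation. The sketch below is purely illustrative: the option names and the example entry are placeholders, not statements about which CABLE options actually work together.

```python
"""Sketch: track which option combinations are supported as an explicit,
reviewable table checked in CI or at start-up.

Option names and the example entry are placeholders only.
"""

# (option_a, option_b) pairs known NOT to work together, with a reason.
KNOWN_INCOMPATIBLE = {
    (("soil_model", "sli"), ("gw_model", True)):
        "example placeholder: aquifer scheme only tested against soil_snow",
}

def check_config(config: dict) -> list[str]:
    """Return warnings for any known-unsupported combinations in config."""
    problems = []
    items = set(config.items())
    for (a, b), reason in KNOWN_INCOMPATIBLE.items():
        if a in items and b in items:
            problems.append(f"{a} with {b} is not supported: {reason}")
    return problems

if __name__ == "__main__":
    cfg = {"soil_model": "sli", "gw_model": True}   # illustrative only
    for msg in check_config(cfg):
        print("WARNING:", msg)
```

The table itself would need maintaining, but at least the knowledge would sit in one reviewable place and could fail a CI job, rather than surfacing later as a confusing science result.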
The ultimate risk here is that some science capability ends up becoming obsolete simply because the connections into the rest of the model aren't being maintained properly, even if it represents 'better science'. Notably, there are strategic decisions (e.g. whether to adopt SLI in CCAM) that depend on this.
@tha051 @rml599gh @mcuntz @ccarouge @rkutteh @bibivking @juergenknauer @JhanSrbinovsky @aukkola - any thoughts/reactions?