Open ekluzek opened 6 years ago
Erik Kluzek < erik > - 2011-03-02 13:01:29 -0700
After discussing this with Sam, we think the issue with interpinic is that it will need to map the PFT weights. The way interpinic works now, it only maps fields for PFTs whose weights are non-zero and leaves the PFT weights themselves unchanged. For CNDV, the PFT weights will need to be mapped along with the other variables.
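A minimal sketch of the two behaviors, using illustrative names and data only (this is not the actual interpinic code):

program interpinic_weight_sketch
   implicit none
   integer, parameter :: r8 = selected_real_kind(12)
   integer, parameter :: npft = 4
   real(r8) :: wt_in(npft)     = [0.5_r8, 0.3_r8, 0.2_r8, 0.0_r8]
   real(r8) :: state_in(npft)  = [1.0_r8, 2.0_r8, 3.0_r8, 4.0_r8]
   real(r8) :: wt_out(npft)    = [1.0_r8, 0.0_r8, 0.0_r8, 0.0_r8]
   real(r8) :: state_out(npft) = 0.0_r8
   integer  :: p

   ! Current behavior: only PFTs with non-zero output weight get state copied;
   ! wt_out stays whatever the output initial/surface data say.
   do p = 1, npft
      if (wt_out(p) > 0._r8) state_out(p) = state_in(p)
   end do

   ! Behavior needed for CNDV: the PFT weights are prognostic state, so they
   ! must be mapped too, along with the other per-PFT variables.
   do p = 1, npft
      wt_out(p)    = wt_in(p)
      state_out(p) = state_in(p)
   end do

   print *, 'mapped weights:', wt_out
end program interpinic_weight_sketch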
Sam Levis < slevis > - 2012-06-25 13:07:59 -0600
I post this info in case we decide that we want to pursue this fix.
Guiling Wang (UConn) got interpinic to work with CNDV. She sent me her interpinic.F90 and BiogeophysRestMod.F90 with the following email dated 5/30/2012, which I have filed in my /clm/interpinic email folder:
Hi Sam,
This is the interpinic.F90 I modified. In addition to the changes to make sure that all 17 PFTs at an individual grid cell are from the same grid cell, I also needed to take several variables out of the list of cycled variables. I therefore need the corresponding change in BiogeophysRestMod.F90 (the segment of code labelled with "EaSM"). I think it is good to keep these changes in BiogeophysRestMod.F90 for future releases: (1) otherwise the model does not work if the PFT_ variables are not cycled in interpinic, in which case the changes are absolutely necessary; (2) if the PFT_ variables are cycled, then the changes I added to BiogeophysRestMod won't make any difference. So the added portion works either way.
FYI, most of the changes are indicated with EaSM, either at the end of the line, or at the beginning and end of a segment of code.
Please let me know if you spot any problem.
Thanks, Guiling
Bill Sacks < sacks > - 2015-09-18 11:46:50 -0600
In principle, this same problem would apply to any aspect of dynamic landunits / columns / patches that is generated internally by CLM. So far, I believe that would just apply to ED: all other aspects of the dynamics are either read from file (transient crops & PFTs) or come from another component (dynamic glacier area).
Bill Sacks < sacks > - 2015-09-18 11:48:48 -0600
I also noticed that the allPFTSfromSameGC flag that used to be present in interpinic is no longer present in the clm4.5 and later initInterp. I believe that flag related to the operation of CNDV. So something like that may need to be brought back if we want interpinic to work for CNDV.
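As a rough illustration of what an allPFTSfromSameGC-style search does (purely illustrative names and data, not the actual initInterp code): rather than choosing a source point independently for each output PFT, one closest source grid cell is chosen, and every PFT's weight (and state) is taken from that same cell, so the interpolated PFT weights stay mutually consistent for CNDV.

program same_gridcell_sketch
   implicit none
   integer, parameter :: r8 = selected_real_kind(12)
   integer, parameter :: nsrc = 3, npft = 4
   real(r8) :: lat_src(nsrc) = [40._r8, 45._r8, 50._r8]
   real(r8) :: lon_src(nsrc) = [255._r8, 260._r8, 265._r8]
   real(r8) :: wt_src(npft, nsrc) = reshape( &
        [0.40_r8, 0.30_r8, 0.20_r8, 0.10_r8, &
         0.70_r8, 0.10_r8, 0.10_r8, 0.10_r8, &
         0.25_r8, 0.25_r8, 0.25_r8, 0.25_r8], [npft, nsrc])
   real(r8) :: wt_out(npft)
   real(r8) :: lat_out, lon_out, d2, d2min
   integer  :: g, gmin

   lat_out = 44._r8
   lon_out = 261._r8

   ! Find the single closest source grid cell (real code would use
   ! great-circle distance; squared differences are enough for a sketch).
   d2min = huge(1._r8)
   gmin  = 1
   do g = 1, nsrc
      d2 = (lat_src(g) - lat_out)**2 + (lon_src(g) - lon_out)**2
      if (d2 < d2min) then
         d2min = d2
         gmin  = g
      end if
   end do

   ! All PFT weights (and, analogously, all per-PFT state variables) come
   ! from that one cell, rather than from a different cell for each PFT.
   wt_out(:) = wt_src(:, gmin)
   print *, 'closest source cell:', gmin, ' weights:', wt_out
end program same_gridcell_sketch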
Bill Sacks < sacks > - 2016-11-04 13:49:10 -0600
This is also somewhat of an issue for glaciers, for which the area comes from GLC, since the fields from GLC aren't available at initialization.
I'm working around that issue with some special-purpose code for glaciers at the start of the driver run loop, like this (in clm_driver.F90):
! ========================================================================
! In the first time step of a run that used cold start or init_interp, glacier areas
! will start at whatever is specified on the surface dataset, because coupling fields
! from GLC aren't received until the run loop. Thus, CLM will see a potentially large,
! fictitious glacier area change in the first time step after cold start or
! init_interp. We don't want this fictitious area change to result in any state or
! flux adjustments. Thus, we apply this area change here, at the start of the driver
! loop, so that in dynSubgrid_driver, it will look like there is no glacier area
! change in the first time step.
!
! This needs to happen very early in the run loop, before any balance checks are
! initialized, because - by design - this doesn't conserve mass at the grid cell
! level. (The whole point of this code block is that we adjust areas without doing
! the typical state or flux adjustments that need to accompany those area changes for
! conservation.)
!
! This accomplishes approximately the same effect that we would get if we were able to
! update glacier areas in initialization. The one difference - and minor, theoretical
! problem - that could arise from this start-of-run-loop update is: If the first time
! step of the CESM run loop looked like: (1) GLC runs and updates glacier area (i.e.,
! glacier area changes in the first time step compared with what was set in
! initialization); (2) coupler passes new glacier area to CLM; (3) CLM runs. Then the
! code here would mean that the true change in glacier area between initialization and
! the first time step would be ignored as far as state and flux adjustments are
! concerned. But this is unlikely to be an issue in practice: Currently GLC doesn't
! update this frequently, and even if it did, the change in glacier area in a single
! time step would typically be very small.
!
! If we are ever able to change the CESM initialization sequence so that GLC fields
! are passed to CLM in initialization, then this code block can be removed.
! ========================================================================
need_glacier_initialization = (is_first_step() .and. &
     (is_cold_start .or. is_interpolated_start))

if (create_glacier_mec_landunit .and. need_glacier_initialization) then
   !$OMP PARALLEL DO PRIVATE (nc, bounds_clump)
   do nc = 1, nclumps
      call get_clump_bounds(nc, bounds_clump)
      call glc2lnd_inst%update_glc2lnd_non_topo( &
           bounds = bounds_clump, &
           glc_behavior = glc_behavior)
      call dynSubgrid_wrapup_weight_changes(bounds_clump, glc_behavior)
   end do
   !$OMP END PARALLEL DO
end if
Ideally, though, we'd either (a) get glacier areas from GLC -> CLM in initialization, or (b) (relevant to this bug report) interpolate these glacier areas in init_interp. If we did (b) then I could probably remove the check for is_interpolated_start in the above code.
Bill Sacks < sacks > - 2016-11-04 13:50:25 -0600
However, regarding comment 7 (now https://github.com/ESCOMP/ctsm/issues/76#issuecomment-352208992): It's still possible that we'd want to avoid doing the state / flux adjustments in the first time step after init_interp, because it may be the case that we often still have a large (fictitious) change in glacier area in that first time step. This would need more thought.
Regarding https://github.com/ESCOMP/ctsm/issues/76#issuecomment-352208997 -- see https://github.com/ESCOMP/ctsm/issues/340 . Summary: I plan to change that logic to avoid doing the dynamic landunit adjustment fluxes in the first time step of any run.
Fixing #346 will partially address this issue, but there will still be a need for something more general to handle interpolating to a different grid: note that #346 proposes to find a point at the exact same grid cell. I thought about generalizing it to find the closest grid cell and copying everything (including subgrid areas) from there, but that won't work in general, because the target output point may not contain all of the landunits/columns/patches from the closest input point, so the copied areas won't sum to 1. So I'm currently thinking that we'll need separate modes of operation for different use cases, such as the use case of handling dynamic vegetation. Alternatively, maybe we need a fundamentally different scheme to handle this more generally. I'm not entirely sure.
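To make the "areas won't sum to 1" point concrete, here is a tiny sketch with made-up numbers (not CTSM code): the closest input point has a glacier landunit that the output point doesn't allocate, so copying its weights wholesale leaves the output point's weights summing to less than 1.

program copy_subgrid_sketch
   implicit none
   integer, parameter :: r8 = selected_real_kind(12)
   ! Landunit weights at the closest input point: natural veg, crop, glacier
   real(r8) :: wt_in(3) = [0.6_r8, 0.1_r8, 0.3_r8]
   ! The output point only has memory allocated for natural veg and crop, so
   ! only those weights can actually be copied.
   logical  :: present_out(3) = [.true., .true., .false.]
   real(r8) :: wt_out(3)
   real(r8), parameter :: tol = 1.e-12_r8

   where (present_out)
      wt_out = wt_in
   elsewhere
      wt_out = 0._r8
   end where

   if (abs(sum(wt_out) - 1._r8) > tol) then
      print *, 'copied weights do not sum to 1:', sum(wt_out)
   end if
end program copy_subgrid_sketch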
Closing #211 as a duplicate of this, but there may be some useful info there.
Sam Levis < slevis > - 2010-03-18 11:33:09 -0600
Bugzilla Id: 1127; Bugzilla Depends: 1303; Bugzilla CC: andre, dlawren, erik, rfisher, sacks
Interpinic has not worked for the old DGVM since probably before clm3.5. Interpinic has not yet been tested for CNDV, so we assume that it does not work.
With clm4 we will supply spun-up initial conditions for CNDV for year-2000, 2-degree simulations. Users will need to complete their own spin-ups for other CNDV simulations.
We need to convey the above info in the clm4 user's guide.