Hi Bill S.,
Thanks for these results and for sharing them with us. One thing I notice in the CISM output is that you seem to be using an initial file built with the ocean block decomposition (which speeds up CISM by not computing anything over the ocean-defined blocks); you can see that from the block patterns around Antarctica in the CISM results. What is weird is that everything outside the "discarded" ocean blocks seems to be behaving just fine, while the problems all show up within those ocean blocks. That said, I don't know why this would be problematic at all. Could you send me a link to your case?
Thanks
Gunter
On Wed, Aug 25, 2021 at 5:45 PM Bill Sacks wrote:
I noticed that, in some of my Antarctica tests in CESM, SMB accumulates in areas that are in the ocean. I have noticed this in an I compset test that uses very coarse land resolution (10 deg x 15 deg: /glade/scratch/sacks/ERS_Vnuopc_Ly3.f10_f10_ais8_mg37.I1850Clm50SpGa.cheyenne_intel.20210811_223534_2a9bfb). It's possible that this problem won't arise in more realistic production resolutions (it doesn't appear in a T compset where the land forcing is at f09_g17 resolution; I haven't tried an I compset at that 1-degree resolution), but I still think that this is a real issue that should be fixed.
Here is a figure showing SMB from CISM's history file from the above test:
[image: 20210825_154540_6UE7AI] https://user-images.githubusercontent.com/6266741/130877735-f40a61dd-78b5-4d92-bd9e-d00477ce449b.png
This positive SMB in the periphery of the grid (which I think should be open ocean) leads to an accumulation of ice thickness in these grid cells.
Here is the corresponding SMB map from the coupler history file:
[image: 20210825_163521_wvZ5Rz] https://user-images.githubusercontent.com/6266741/130877818-dae7bb70-8048-4f9d-8ca0-ccb7b1b86f6c.png
From digging into some of the coupling fields and comparing the behavior with that in Greenland simulations, I think what's going on is:
- The icemask correctly starts as 0 in open ocean points in Antarctica.
- There is always potentially a non-zero SMB passed in areas outside of the ice sheet: CTSM generates SMB outside as well as inside the icemask, and assumes that CISM will discard any SMB sent outside the icemask.
- For Greenland runs, CISM appears to properly be discarding SMB sent outside the icemask. However, for Antarctica runs, there are some grid cells outside the icemask that accept this SMB and start growing ice.
- It appears that these problematic points start with usurf = 0. So it seems like the problem is that some usurf = 0 points are able to accept SMB in Antarctica, in contrast to Greenland. Interestingly, there is a swath of points in the Antarctica run that seems to zero out the SMB, but beyond that swath, it accepts SMB.
This is a problem not only because it leads to ice growth in the open ocean, but also because I think it would break conservation in a fully-coupled run: CTSM assumes that the icemask (actually icemask_coupled_fluxes, but in practice they are effectively the same) dictates where CISM is able to receive SMB. Conversely, it assumes that, if icemask_coupled_fluxes is 0, then CISM is not able to receive SMB there, and so CTSM should send any generated SMB directly to the ocean via the snow capping flux. But, for system-wide conservation, that means that CISM needs to be consistent about this: if CISM says that icemask_coupled_fluxes is 0 (which is typically the case over open ocean), then it needs to discard any SMB that is sent there. I'll admit that this coupling is rather subtle and error-prone, and could probably stand to be reworked (probably with the coupler/mediator handling some of this logic itself), but for now that's how things work.
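To spell out that contract, here is a minimal sketch (illustrative only; these are not the actual CTSM or CISM routines, and all names below are placeholders) of the two sides of the conservation logic just described:

! Illustrative sketch only -- not actual CTSM/CISM source; names are placeholders.

! Land side: SMB goes to the ice sheet only where icemask_coupled_fluxes is
! nonzero; elsewhere it is routed to the ocean via the snow capping flux.
subroutine route_smb(icemask_coupled_fluxes, smb_generated, smb_to_glc, snow_capping_to_ocn)
   implicit none
   real(8), intent(in)  :: icemask_coupled_fluxes(:,:)   ! 0/1 mask reported by the ice sheet model
   real(8), intent(in)  :: smb_generated(:,:)            ! SMB generated by the land model
   real(8), intent(out) :: smb_to_glc(:,:)               ! SMB sent to the ice sheet model
   real(8), intent(out) :: snow_capping_to_ocn(:,:)      ! SMB routed to the ocean instead

   where (icemask_coupled_fluxes > 0.d0)
      smb_to_glc = smb_generated
      snow_capping_to_ocn = 0.d0
   elsewhere
      smb_to_glc = 0.d0
      snow_capping_to_ocn = smb_generated
   end where
end subroutine route_smb

! Ice sheet side: for conservation, any SMB received where
! icemask_coupled_fluxes = 0 (e.g., open ocean) must be discarded.
subroutine discard_smb_outside_mask(icemask_coupled_fluxes, acab)
   implicit none
   real(8), intent(in)    :: icemask_coupled_fluxes(:,:)   ! same 0/1 mask
   real(8), intent(inout) :: acab(:,:)                     ! SMB as seen by the ice sheet model

   where (icemask_coupled_fluxes == 0.d0)
      acab = 0.d0
   end where
end subroutine discard_smb_outside_mask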
@gunterl - interesting. I had been attributing the blockiness to the coarse-resolution land forcing (10° x 15°), but I hadn't looked into it carefully. Your explanation could account for why there is a ring of correctly-handled points in the ocean grid cells near Antarctica while the problems appear further afield.
The case is here: /glade/scratch/sacks/ERS_Vnuopc_Ly3.f10_f10_ais8_mg37.I1850Clm50SpGa.cheyenne_intel.20210811_223534_2a9bfb
@billsacks, Thanks for the detailed explanation and plots. Like @gunterl, I noticed that the SMB patterns seem connected to CISM's inactive blocks – the regions that are identified by CISM's ice_domain_mask as permanently ice-free, so that no processors need to be assigned there. We typically use this setting for Antarctica but not for Greenland.
My interpretation of the problem is that you're seeing a nonzero SMB in the inactive blocks. This might be an initialization issue. The coupler is passing a nonzero SMB over the ocean, and CISM fails to zero it out because it's not doing any local computations in those regions. Maybe we need to do something with a global array. These regions do exist at the global level, even though they aren't mapped to any local processor.
The inactive-block option is enabled by setting compute_blocks = 1 in the namelist. So a first step would be to set compute_blocks = 0 and see if this SMB issue goes away.
Then we can look for the best place in the code to add some new logic. I don't know where this would be, but I'm glad to take a look.
It looks like compute_blocks is already 0 in /glade/scratch/sacks/ERS_Vnuopc_Ly3.f10_f10_ais8_mg37.I1850Clm50SpGa.cheyenne_intel.20210811_223534_2a9bfb/run/cism.ais.config.
This is definitely something we should look at carefully when doing a run that has compute_blocks = 1, but that doesn't seem to be the issue here.
@billsacks – Interesting. I also see you're running Antarctica on only 72 processors, which isn't enough to generate this pattern of active and inactive blocks.
Nevertheless, I think this problem is related to the compute_blocks option, because the pattern is so similar to what I'm used to seeing in Antarctic 8km runs with this option turned on. Could you please point me to your CISM input file? Maybe this file has a bad 'topg' field from a previous run with compute_blocks = 1.
@billsacks – Thinking ahead a bit, we'll want to make sure that when we do set compute_blocks = 1, we write out topg with values that won't mislead the coupler on restart. I think we want to avoid topg = 0 in the inactive regions. In these regions, we may want to save the initial global topg field and always write out this field, instead of a field that is gathered from the active blocks and has incorrect values for the inactive blocks.
The init file is: /glade/p/cesmdata/cseg/inputdata/glc/cism/Antarctica/ISMIP6_Antarctica_8km.1950_init_c210613.nc
And you're right: here's the topg from that file (the brownish points are exactly 0):
Besides the obvious issue that topg should have reasonable values for the whole domain instead of having 0 values in these more distant ocean points, this makes me think that there may be a more subtle issue of how points with usrf exactly equal to 0 are treated.
For the sake of determining what points are inside the icemask, we use this function:
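The function itself isn't quoted here, but roughly, its check amounts to the following (a sketch from memory, not copied from the CISM source; the declarations are placeholders):

! Sketch only -- paraphrased, not the actual CISM source.
logical function is_in_active_grid(usrf)
   implicit none
   real(8), intent(in) :: usrf   ! upper surface elevation (m)

   ! A point counts as part of the active grid (and hence inside the icemask)
   ! only if its surface elevation is strictly positive.
   is_in_active_grid = (usrf > 0.d0)
end function is_in_active_grid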
The assumption there (if I remember correctly) is that points with usrf <= 0 will throw away any SMB they are handed. But here we have points with usrf exactly equal to 0 that seem to be accumulating SMB. Is it possible that there is a check somewhere else in CISM that throws away SMB for points with usrf strictly less than 0, and that we should bring those two checks into sync so that points with usrf exactly 0 are treated consistently?
@billsacks – Sorry, I should have looked at this file more closely. It's not an ideal input file. It seems to be a restart file that was repurposed as an input file, which is why topg is wrong. It also contains some fields that are needed only for exact restart and not for initialization.
In the near term, I think it will be enough to fix the topography. @gunterl, could you please make a file that has the correct bedmachine topg throughout the domain?
In the past, it wasn't easy to remove files from inputdata subdirectories. Is this still true? It would be nice to be able to experiment with inputs before we settle on a production version.
I'll follow up with a reply to your most recent post.
Yes, I can make such a file. To be clear: do we want a file with no ice_domain_mask pattern in any field, or only in topg?
In the past, it wasn't easy to remove files from inputdata subdirectories. Is this still true? It would be nice to be able to experiment with inputs before we settle on a production version.
Once you have added a file to the inputdata svn repo, it is effectively frozen. But you can experiment with files on glade to come up with something you're happy with, then we can just add the final version.
Ideally the new input file will use exactly the same grid as the old one – then we won't need to remake any grid / mesh files.
@billsacks – Yes, CISM may be doing the wrong thing where topg is exactly zero. For instance, here in glad_timestep:
! Set acab to zero for ocean cells (bed below sea level, no ice present)
where (GLIDE_IS_OCEAN(instance%model%geometry%thkmask))
   instance%acab = 0.d0
endwhere
In general, I've been deprecating the old Glide masks in favor of Glissade masks, so I'm not happy to see GLIDE_IS_OCEAN here. However, we would get the same result using the Glissade ocean mask. Over in glide_masks.F90, we have this:
! Identify points where the ice is floating or where there is open ocean
where (topg - eus < (-rhoi/rhoo) * thck)
   mask = ior(mask, GLIDE_MASK_OCEAN)   ! GLIDE_MASK_OCEAN = 8
elsewhere
   mask = ior(mask, GLIDE_MASK_LAND)    ! GLIDE_MASK_LAND = 4
endwhere
Cells with topg = thck = 0.d0 will be identified as land, hence capable of receiving an SMB.
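To make the arithmetic explicit, here is a minimal standalone check (nominal densities, purely illustrative) showing that a cell with topg = thck = 0 lands in the GLIDE_MASK_LAND branch:

program check_zero_topg
   implicit none
   real(8), parameter :: rhoi = 910.d0    ! nominal ice density (kg/m3)
   real(8), parameter :: rhoo = 1028.d0   ! nominal ocean density (kg/m3)
   real(8) :: topg = 0.d0, thck = 0.d0, eus = 0.d0

   if (topg - eus < (-rhoi/rhoo) * thck) then
      print *, 'ocean'   ! 0 < 0 is false, so this branch is never taken
   else
      print *, 'land'    ! this branch is taken: the cell can receive an SMB
   end if
end program check_zero_topg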
I'd like to (1) bring these two checks into agreement, and I'd prefer to keep the convention that topg = 0 implies land, not ocean. In addition, we need to (2) fix the topg output for compute_blocks = 1, and (3) produce a better input file. Maybe you could do (1), I could work on (2), and in due course @gunterl could take care of (3).
Going forward, let's experiment with input files on Glade. These will be on the same 8km grid as the file that's already in inputdata. We won't put any more files in inputdata without some thorough vetting.
I anticipate that a 4km grid will be the default for production runs.
@gunterl – For now, I think it will be enough to give @billsacks a file with the correct topography and ice thickness. The thickness is already OK, since we have thk = 0 in the inactive region. So just swap out the topography.
But we should start thinking about fields that should go in the final input file. We can start with the fields that would be in a standard ISMIP6 file before spin-up. To these we would add tempstag, powerlaw_c_inversion, and deltaT_basin from the end of the spin-up, since these fields evolve during the spin-up. I can't think of any others we need, assuming that we have in mind a forward run with spun-up temperature, C_p, and deltaT_basin.
To prepare the final version, we should repeat our spin-up with compute_blocks = 0, so we don't have the block structure anywhere in the input file.
This sounds like a good plan. For (1), though, I'm not clear on exactly what you envision to keep these checks in agreement. The easiest thing would be for me to change the conditional in is_in_active_grid to check usrf >= 0 instead of usrf > 0. Is that what you have in mind for now, or do you feel that I should (a) use an existing mask variable in place of that check (if so, which one), and/or (b) change the GLIDE_IS_OCEAN check in glad_timestep to instead use a mask in glissade (if so, again, which one)?
@billsacks – What I had in mind is just that you change the conditional in is_in_active_grid to check usrf >= 0. I'll take care of GLIDE_IS_OCEAN at a later stage, probably as part of an overall mask cleanup. It's OK for now since it agrees with the glissade mask ('ocean_mask') that would replace it.
Thinking some more about compute_blocks = 1: This is a useful feature for people who want to do multiple long spin-ups in TG mode, but the cost savings is in the noise for century-scale BG runs. For now, it will be easier to work with input files that are defined on the entire grid. Down the road, to support compute_blocks = 1, would it make sense to write special values in the inactive-block region to avoid confusion?
Another issue: It doesn't feel quite right to me to put input files with spun-up temperature and basal friction fields permanently in inputdata. If we tweak any inversion parameter, we get a different answer. There's an analogous issue with spun-up BGC fields in other components, including CTSM. Are multiple files with spun-up results put in inputdata, or just the basic information used for a cold start? For ice sheets, the basic data include ice thickness and topography, geothermal heat flux, and climatological thermal forcing. These data could serve as the starting point for many spin-ups and wouldn't need to be replaced so often.
@billsacks and @whlipscomb, I have added a new initial file with full bed topography here: /glade/p/cesmdata/cseg/inputdata/glc/cism/Antarctica/ISMIP6_Antarctica_8km.1950_init_c210826.nc
Note that I used lossless compression on this file to reduce its size. Also, I noticed in Bill S.'s output file that the "artm" field also does not look quite right. I guess that is not too surprising, as artm and smb are handled similarly. Just thought I would mention it, though.
@whlipscomb - Okay, thanks, I'll make that simple change shortly. Regarding your other questions:
Down the road, to support compute_blocks = 1, would it make sense to write special values in the inactive-block region to avoid confusion?
That makes sense to me.
Another issue: It doesn't feel quite right to me to put input files with spun-up temperature and basal friction fields permanently in inputdata. If we tweak any inversion parameter, we get a different answer. There's an analogous issue with spun-up BGC fields in other components, including CTSM. Are multiple files with spun-up results put in inputdata, or just the basic information used for a cold start? For ice sheets, the basic data include ice thickness and topography, geothermal heat flux, and climatological thermal forcing. These data could serve as the starting point for many spin-ups and wouldn't need to be replaced so often.
For CTSM, we put fully spun-up files in the inputdata repository, because this is what gets pointed to out-of-the-box so that people can run with a spun-up land and not need to do the expensive and somewhat tricky spinup themselves. These spun-up files are roughly 1 GB in size, with some of them a few GB (see /glade/p/cesmdata/cseg/inputdata/lnd/clm2/initdata_map). Even for a given version of the model, there are a number of different initial conditions files available (probably at least 10), for different years, atmospheric forcings, etc. My sense is that they are replaced every few years, when there is a new scientifically-supported version of the model. A few 10s of GBs is actually relatively small in the grand scheme of the inputdata repository: For example, generating all new surface datasets for CTSM generates a few TB of new data.
Following that example, I feel it would be very reasonable for you to include your current best estimate of inverted fields in the initial file, if you feel that's what users should use for their own simulations in general. But if this is something that you change frequently (say, multiple times a year), then it could make sense to update the out-of-the-box file – which would be the file in inputdata – less frequently, e.g., timed with releases of CISM.
@gunterl - thank you for adding the file. However, the format needs to be changed: some of our supported systems have problems reading NetCDF4-formatted files, so we aren't even allowed to add NetCDF4-formatted files to the inputdata repository. I don't have experience with the built-in NetCDF compression: does it require NetCDF4 format? If not, you could simply convert this file to an allowed format (in the past I have used ncks -5 FILENAME for this). However, unless you know for sure that compressed files can be read by CESM on all systems, can you please send an email to Jim Edwards (cc'ing me) describing how you are doing the compression and confirming that it will be okay to do this? (It may be relevant to mention to him that CISM doesn't use PIO.)
As I discussed with @whlipscomb by phone this afternoon, it didn't work to change the conditional in is_in_active_grid to just be usrf >= 0 instead of usrf > 0: all ocean points have usrf == 0, so this change made the icemask equal to 1 everywhere, which is not what we want.
@whlipscomb had some thoughts about how to handle this, such as checking usrf > 0 .or. topg >= 0, but since we want to be careful to get this right (and consistent with other CISM code), and since this is somewhat tangential to the main Antarctica issue here, we are going to defer this. I'll open a separate issue for this problem.
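For reference, here is a purely illustrative sketch of the kind of combined check that was floated (a deferred idea, not an agreed-upon or tested change; declarations are placeholders):

! Sketch only -- deferred idea, not actual or proposed CISM code.
logical function is_in_active_grid(usrf, topg)
   implicit none
   real(8), intent(in) :: usrf   ! upper surface elevation (m)
   real(8), intent(in) :: topg   ! bed topography (m)

   ! Treat a point as active if its surface is above sea level, or if its bed
   ! is at or above sea level -- so topg = 0 counts as land rather than ocean.
   is_in_active_grid = (usrf > 0.d0 .or. topg >= 0.d0)
end function is_in_active_grid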
@billsacks and @gunterl – I just sent a detailed email. Here, I'd like to summarize my understanding of where we are and ask some questions.
Thanks for your thoughts on this @whlipscomb . For the work I'm doing now, all I really need is something that can run for a few years and give results that aren't complete garbage. So probably any of (1), (2a) or (2b) will work. If you feel like it's worth the time to go ahead with (2a) or (2b) now, I can wait for those. But if you have other higher priority things you'd like to be doing now, I think it would also work to use (1) for now and then return to this later: I know it means putting a temporary file in inputdata that we plan to replace, but this file isn't too large.
@billsacks, I suggest (2a). Earlier this week, I was thinking (1) because I didn't want to hold you up. But now I'm thinking that we've never run the model with compute_blocks = 0 after reading input fields from a run with compute_blocks = 1 (nor would we, in practice). This seems a bit dangerous – we might spend time tracking down errors that result from an artificial configuration. I can work with @gunterl on a robust cold-start input file.
@billsacks, To get you going quickly (I hope), please look at this directory: /glade/p/cesm/liwg_dev/lipscomb/cism_antarctica/stage_cism_spinup
I'm using an input file that @gunterl created for ISMIP6 Antarctic spin-ups: ISMIP6_Antarctica_8km.bedmachine2.nc
This config file is similar to what we used for ISMIP6, with a few updates to account for recent code changes: ismip6.config.init
To make sure things are working, I launched a 1000-year run using the cism_driver executable compiled from cism_main_2.01.004: 1e376e1e (tag: cism_main_2.01.004, main) Merge pull request #36 from ESCOMP/lipscomb/ais_coupled
And the output looks OK. I think this will give you what you need for testing, after tweaking the namelists. Please let me know if you have any questions.
@whlipscomb and @gunterl - This new file (ISMIP6_Antarctica_8km.bedmachine2.nc) has lat and lon values that differ by more than just roundoff from the file we had been using before (the files in /glade/p/cesmdata/cseg/inputdata/glc/cism/Antarctica). Can one of you help reconcile this? Ideally the new file would use the same grid as the original one to avoid needing to update the mesh file pointed to by cime, but if we need to update the mesh file then we can do that. I can't tell, though, whether the differences in lat & lon values are intentional or if (for example) this new file is meant to be on the exact same grid as the old ones and the lat and lon values are just incorrect on this new one.
Specifically, I'm seeing:
-57.8708610534668, -57.9235382080078, -57.9761734008789
on the old file, and
-57.82823, -57.88089, -57.93351
on the new file. Could one of you please look into this and let me know how to proceed? The ideal solution for me would be if you determine that the new file is supposed to have the same latitude and longitude values as the old one, and then we can just copy the latitude and longitude values from the old one to the new one. But I realize that may not be the correct thing to do.
@billsacks, I will defer these questions to @gunterl. What you're calling the 'new' file was actually made some time ago for ISMIP6 simulations, and what you're calling the 'old' file may be a file for which Gunter computed lat and lon more recently. My guess would be that the 'old' values are the ones we want, but Gunter can confirm.
@whlipscomb - As I started to compare the config file you pointed me to with what comes out of the box, I remembered that this is tricky because there are quite a few intentional differences between what we use in CISM-in-CESM vs. these standalone configurations, which I spent a while going through a few months ago. So instead I dug up the config file you pointed me to a few months ago, which I used as the basis for the defaults in CISM-wrapper, and diffed this new config file with that one.
I am attaching the two files for reference.
ismip6.config.init_20210903.txt ismip6.config.init_20210602.txt
Potentially relevant differences are the following:
Can you please confirm that I should make all of the above changes?
@billsacks – Thanks for looking carefully at the new config file. Let me respond to each point:
In short: Yes, please make all the above changes. Thanks!
Thanks @whlipscomb - I'll make those changes soon. One more question about the inversion_ parameters: Is it safe to change the defaults for both gris & ais (which would be slightly easier), given that we use which_ho_cp_inversion and which_ho_bmlt_basininversion = 0 by default for gris? Or should we maintain the old default values for these inversion parameters for Greenland?
@billsacks – Yes, it's OK to change the defaults for both ice sheets. I haven't run Greenland with bmlt_basin inversion, but for Cp inversion the same parameters that work for the AIS should work for the GrIS.
Hi @billsacks, sorry for the delay with this. I am not sure what happened to the older file and its lat-lon values (my guess, given the metadata, is that they were interpolated from a higher resolution? It's been a while). I added an updated file with corrected lat-lon values that coincide with those in the previous files. It is located here: /glade/p/cesmdata/cseg/inputdata/glc/cism/Antarctica/ISMIP6_Antarctica_8km.init.c210908.nc
Thanks
Gunter
Thanks for your recent input, @whlipscomb and @gunterl . I have started testing with the new init file and config settings.
Changes are here: https://github.com/ESCOMP/CISM-wrapper/pull/62
With the new Antarctica init file, the problem that started this issue is fixed. I will make a CISM-wrapper tag with this change shortly.