Hello Jian, this is normal. It is generating the OASIS remapping files (`rmp_*`). After doing this once, you can store them in a pool directory (e.g. `/p/project/chhb19/shi4/input/oasis/cy43r3/TCO319-HR/${nprocfesom}/rmp`) and link them in for all subsequent runs. Be aware that you need a separate set of `rmp_*` files if you change the number of fesom cores.
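A minimal sketch of that store-and-link step, assuming the pool path above and that the weights are NetCDF files named `rmp_*.nc`; `RUNDIR` is a hypothetical placeholder for the experiment work directory:

```bash
# One-time, after the first run has generated the weights: copy them to the pool.
# Pool path taken from the comment above; RUNDIR is a hypothetical placeholder.
RMP_POOL=/p/project/chhb19/shi4/input/oasis/cy43r3/TCO319-HR/${nprocfesom}/rmp
mkdir -p "${RMP_POOL}"
cp "${RUNDIR}"/rmp_*.nc "${RMP_POOL}/"

# All subsequent runs: link the precomputed weights in instead of regenerating them.
for f in "${RMP_POOL}"/rmp_*.nc; do
    ln -sf "${f}" "${RUNDIR}/"
done
```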
You can also send the link to me and @pgierz, and we will add them to the default pool directory.
AWI-ESM2 runs at much lower resolution, so generating these remapping files is much faster there.
If you go to even higher resolution, you can follow https://awi-cm3-documentation.readthedocs.io/en/latest/how_to.html#generate-oasis3mct-remapping-weights-for-large-grids-offline-and-mpi-omp-parallel for a faster but more work-intensive solution.
Moin,
> As far as I know, other climate models, like awi-esm2, have no similar issue.
This happens in AWIESM-2 as well; you probably just do not notice it because the "extra" time is considerably shorter than in the high-res AWICM3 case, since far fewer re-gridding weights need to be calculated at the typical AWIESM-2 resolution (normally T63 plus CORE2).
@JanStreffing: if we want to store these things in the pool, that is in principle no problem, but we should think about a strategy to ensure that the regridding weights actually match the atmosphere/ocean grids in use. Maybe some kind of checksum? That solution will need some brainstorming, though...
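One possible shape for the checksum idea, as a rough sketch only: hash the OASIS grid description files the weights were built from and key the pool subdirectory by that hash. The pool path here is hypothetical, `areas.nc`/`grids.nc`/`masks.nc` are the usual OASIS3-MCT grid files, and whether hashing exactly these is sufficient would need checking.

```bash
# Rough sketch: fingerprint the grid description files and look up the
# precomputed weights under a pool subdirectory named after that fingerprint.
# The pool path is hypothetical; per the comment above, the weights also
# depend on the number of fesom cores, so keep that in the path as well.
GRID_HASH=$(cat areas.nc grids.nc masks.nc | sha256sum | cut -c1-12)
RMP_POOL=/pool/oasis/rmp/${GRID_HASH}/${nprocfesom}

if [ -d "${RMP_POOL}" ]; then
    # Known grid/decomposition combination: reuse the stored weights.
    ln -sf "${RMP_POOL}"/rmp_*.nc .
else
    echo "No precomputed weights for ${GRID_HASH}; OASIS will generate them."
fi
```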
I just saw that your runscript has lresume=true for oasis. Does this work for you on the first run? I would have thought that for an initial run you need to set this to false, so that oasis does not try to restart and instead goes into LEG=0 mode.
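For reference, in an esm_tools YAML runscript that would look roughly like the snippet below; this is a sketch, and the exact section name for the coupler in your setup is an assumption:

```yaml
# Hypothetical esm_tools runscript excerpt: cold-start the coupler on the
# first leg; switch to true once restart files actually exist.
oasis3mct:
    lresume: false
```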
This issue has been inactive for the last 365 days. It will now be marked as stale and closed after 30 days of further inactivity. Please add a comment to reset this automatic closing of this issue or close it if solved.
Describe the problem you are facing
The awicm3-v3.1 always spends a long time starting the first run, dealing with the partial restart files of fesom: over 30 minutes at high resolution (Tco319-BOLD) and over 20 minutes at low resolution (TCO95-CORE2). The second run takes just a few seconds to read the raw restart files. As far as I know, other climate models, like awi-esm2, have no similar issue. I am not sure whether the problem comes from awicm3 or esm_tools. Is this normal, or is something wrong with my setup?
Runscript and other relevant files
log file of the first run: pi_awicm3_tco95_core2_awicm3_compute_18500101-18591231_1074979.log
log file of the second run: pi_awicm3_tco95_core2_awicm3_compute_18600101-18691231_1076863.log
System (please complete the following information):
Actually, this problem appears with almost every awicm3 version, esm_tools version, and HPC system.