Thanks for writing @InterstellarPenguin. Could you also post the gchp*.log and allPEs.log files?
You can also enable extra debug information in the logging.yml file, as described in our ReadTheDocs:
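For instance, a quick (if blunt) way to turn the logging up before a debug run, assuming the default logging.yml where logger levels are written as `level: WARNING`:

```bash
# Raise every logger in logging.yml from WARNING to DEBUG before a debug run.
# Assumes the default file layout; see the GCHP ReadTheDocs logging page for
# how to adjust individual loggers instead.
sed -i 's/level: WARNING/level: DEBUG/g' logging.yml
```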
@InterstellarPenguin, please note that we do not recommend using GCHP with coarse-resolution meteorology. I do not think using the 2x2.5 fields is causing the problem, but it will give less accurate results.
If C24 works but C48 does not, I recommend trying to run with more cores. Also try explicitly requesting all memory per node with SBATCH.
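For example, the SBATCH header of the job script might look something like the sketch below; the node/core counts and partition name are placeholders, and the total core count must be a multiple of 6 and match the NX*NY layout set in GCHP.rc:

```bash
#!/bin/bash
# Sketch of SBATCH directives for a C48 test run (placeholder values).
#SBATCH --nodes=2
#SBATCH --ntasks=96                  # must be a multiple of 6 and match NX*NY
#SBATCH --ntasks-per-node=48
#SBATCH --mem=0                      # 0 = request all memory on each node
#SBATCH --time=02:00:00
#SBATCH --partition=your_partition   # placeholder

srun -n ${SLURM_NTASKS} ./gchp
```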
Thanks @lizziel @yantosca! I've checked allPEs.log; there are errors related to some ExtData entries, shown in the image below:
I then came across a solution in another issue, #429, which says I should rewrite the setting in ExtData.rc like this:
By the way, in GCHP.rc I'm not sure whether 'GCHPchem_INTERNAL_CHECKPOINT_FILE: Restarts/gcchem_internal_checkpoint' is correct.
The simulation sometimes crashed with a 'netcdf4' error while reading checkpoint files (unfortunately I deleted that case, so I no longer have the logs, sorry about that). However, when I append the '.nc4' extension, i.e. 'GCHPchem_INTERNAL_CHECKPOINT_FILE: Restarts/gcchem_internal_checkpoint.nc4', or turn on the 'WRITE_RESTART_BY_OSERVER' switch in GCHP.rc, it completes successfully. Is that a bug, or was it my mistake?
If a previous run generated Restarts/gcchem_internal_checkpoint and it was not renamed or deleted by the run script, then the model will crash when you try to run again. Do you still have this file after your run crashes? What run script are you using? The run scripts are designed to avoid this issue, so if the one you are using is not catching this then we would definitely like to know.
Generally the O-server is only needed on certain systems when you run with more than 1000 cores. Try running again with the O-server off, with gcchem_internal_checkpoint deleted if it is present, and with GCHPchem_INTERNAL_CHECKPOINT_FILE set back to the original setting.
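Concretely, the cleanup and the GCHP.rc lines in question might look like this (the key names are as quoted above; the NO value is an assumption, so verify against your own GCHP.rc):

```bash
# Remove a leftover checkpoint from a previous run before resubmitting.
rm -f Restarts/gcchem_internal_checkpoint

# In GCHP.rc, keep the original checkpoint name (no .nc4 appended) and leave
# the O-server off, e.g.:
#   GCHPchem_INTERNAL_CHECKPOINT_FILE: Restarts/gcchem_internal_checkpoint
#   WRITE_RESTART_BY_OSERVER: NO
```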
Please note that we do not recommend using the carbon simulation with version 14.4. Fixes are coming in 14.5.1. See the GitHub issues: https://github.com/geoschem/GCHP/issues/440 https://github.com/geoschem/GCHP/issues/437 https://github.com/geoschem/geos-chem/issues/2463
Thanks @lizziel! The run script I used was not from the 'GCHP' directory; that's why the crash happened. I appreciate your reminder about my error and the heads-up about the bug in the carbon simulation!
In HISTORY.rc, I've noticed that there are two different types of CO2 output: 1. EmisCO2 and 2. ProdCO2fromCO. I wonder whether these are different methods for calculating CO2.
My second question: since GCHP, unlike GCClassic, does not use HEMCO to read files, if I want to change the land and ocean carbon flux input data, is it necessary to keep ExtData.rc aligned with HEMCO_Config.rc, or can I just rewrite the inventory entry in ExtData.rc?
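For reference, this is the kind of consistency check I have in mind (CO2_FLUX is just a hypothetical container name):

```bash
# Check that the container name and file path I edit appear consistently in
# both configuration files after swapping in new land/ocean flux inputs.
grep -n "CO2_FLUX" ExtData.rc HEMCO_Config.rc
```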
In HEMCO_Config.rc, I noticed that 'GC_restart' is set to false. I'm curious whether GCHP can automatically recognize the restart file. The simulation runs fine with this switch off, but it crashes when I turn it on (if-restart-2019.log). How should I configure a spin-up without a restart file?
Hi @InterstellarPenguin, please open a new issue for questions outside the scope of the original MPI error. Thanks!
Your name
Linyang Guo
Your affiliation
UCAS
What happened? What did you expect to happen?
Hi all! I'm running a C48 simulation, but it crashed with the following error:
I'm not sure whether the error is related to the settings or MPI.
What are the steps to reproduce the bug?
setCommonSettings.sh:
gchp.job:
ExtData.rc:
Please attach any relevant configuration and log files.
setCommonSettings.txt.txt ExtData.txt
By the way, MetDir has been changed to my own ExtData directory, and when I run GCHP at C24 instead of C48, it completes successfully.
What GCHP version were you using?
14.4.3
What environment were you running GCHP on?
Local cluster
What compiler and version were you using?
ifort 2021.3.0
What MPI library and version were you using?
Intel MPI 2021.3.0
Will you be addressing this bug yourself?
Yes
Additional information
No response