E3SM-Project / e3sm-unified

A metapackage for a unified anaconda environment for analyzing results from the Energy Exascale Earth System Model (E3SM).

MPI_Init error running ilamb on LCRC #95

Closed chengzhuzhang closed 3 years ago

chengzhuzhang commented 3 years ago

I'm trying to set up an ILAMB run on LCRC, but a test run resulted in an MPI_Init error as follows:

ilamb-run --config /home/ac.zhang40/ILAMB/src/ILAMB/data/cmip.cfg --model_root /lcrc/group/e3sm/ac.zhang40/ilamb_test_data/ --regions global bona

--------------------------------------------------------------------------
PMI2_Init failed to intialize.  Return code: 14
--------------------------------------------------------------------------
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

  version 16.05 or later: you can use SLURM's PMIx support. This
  requires that you configure and build SLURM --with-pmix.

  Versions earlier than 16.05: you must use either SLURM's PMI-1 or
  PMI-2 support. SLURM builds PMI-1 by default, or you can manually
  install PMI-2. You must then build Open MPI using --with-pmi pointing
  to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[chr-0369:1658739] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
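In case it helps with debugging, one way to check whether the Open MPI inside e3sm-unified was built with SLURM/PMI support is to query ompi_info from the activated environment (a rough sketch; it assumes ompi_info and mpirun from the conda environment are first on the PATH after sourcing the load script):

# On a compute node, after activating e3sm-unified:
which mpirun && mpirun --version        # confirm which MPI the environment resolves to
ompi_info | grep -i -E "slurm|pmi"      # look for slurm/pmix components in the build
ompi_info --parsable | grep -i pmix     # machine-readable listing of the same information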

xylar commented 3 years ago

Hmm, @chengzhuzhang, this seems way outside my wheelhouse. Is there someone from the E3SM team we could bring in for some help? I'm happy to change the build as needed but I don't know where to begin.

xylar commented 3 years ago

Just to be sure, this was run on a compute node, not a login node?

chengzhuzhang commented 3 years ago

I should clarify that this ilamb run is from e3sm-unified on compute nodes.

chengzhuzhang commented 3 years ago

There is not an ilamb-run executable in e3sm-unified on a login node. I'm wondering if maybe @minxu74 could help take a look. To reproduce, get a compute node:

srun -N 1 -t 01:00:00 --pty bash

source /lcrc/soft/climate/e3sm-unified/load_latest_e3sm_unified_chrysalis.sh
export ILAMB_ROOT=/lcrc/group/e3sm/ac.zhang40/ilamb_data
ilamb-run --config /home/ac.zhang40/ILAMB/src/ILAMB/data/cmip.cfg --model_root /lcrc/group/e3sm/ac.zhang40/ilamb_test_data/ --regions global
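For reference, the same steps can be wrapped in a batch script instead of an interactive session (untested sketch; the node count and wall time are placeholders, and any required account or partition directives are omitted):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=01:00:00
#SBATCH --job-name=ilamb-test

# Activate e3sm-unified and point ILAMB at the data, then run the same test case
source /lcrc/soft/climate/e3sm-unified/load_latest_e3sm_unified_chrysalis.sh
export ILAMB_ROOT=/lcrc/group/e3sm/ac.zhang40/ilamb_data
ilamb-run --config /home/ac.zhang40/ILAMB/src/ILAMB/data/cmip.cfg --model_root /lcrc/group/e3sm/ac.zhang40/ilamb_test_data/ --regions global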

minxu74 commented 3 years ago

@chengzhuzhang could you try the following command to see if it works?

srun -n 1 ilamb-run --config /home/ac.zhang40/ILAMB/src/ILAMB/data/cmip.cfg --model_root /lcrc/group/e3sm/ac.zhang40/ilamb_test_data/ --regions global bona

If it does not work, we may have to use the system mpi4py instead of the one installed by conda.
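If we do end up going that route, one possible approach (a rough sketch, not tested on LCRC; the module name and compiler wrapper are placeholders) is to rebuild mpi4py from source against the system MPI inside the activated environment:

# Load the system MPI first; the module name here is a placeholder
module load openmpi
# Force a source build so mpi4py links against the system mpicc rather than the conda MPI
env MPICC=$(which mpicc) python -m pip install --no-binary=mpi4py --force-reinstall mpi4py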

I do not have an account on the LCRC machines; otherwise, I could look into it and try the above command myself.

xylar commented 3 years ago

Thanks @minxu74. Using system mpi4py might be a possibility.

chengzhuzhang commented 3 years ago

@xylar and @minxu74, thank you for taking a look! When I tried it today, it worked. It must have been a one-time glitch yesterday.

xylar commented 3 years ago

Okay, well, that's a little disconcerting, but hopefully it won't happen again. If it does, let's try to find out more. Using the system mpi4py, if there is such a thing, might be an option.