phi-l-l-ip-thomas opened 8 months ago
Looks like your PETSc does not have SLEPc. But this error is due to a missing #ifdef LIBMESH_HAVE_SLEPC guard in EigenProblem of MOOSE. Tagging @lindsayad
Hi @YaqiWang, thank you for the response! I just restarted the build from the beginning, but this time I ran the script ./update_and_rebuild_petsc.sh instead of loading the e4s version via Spack. This succeeded in finding the correct xpmem and pmi libraries on my system. After building PETSc, libMesh, and WASP, I was also able to build MOOSE without running into the EigenProblem.C compilation issue that I experienced earlier.
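For reference, the rough sequence was as follows (the standard MOOSE dependency scripts; system-specific configure flags are omitted, so treat this as a sketch):

cd moose/scripts
./update_and_rebuild_petsc.sh     # PETSc
./update_and_rebuild_libmesh.sh   # libMesh
./update_and_rebuild_wasp.sh      # WASP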
However, now that MOOSE is built, when I run the tests, I immediately get a segmentation fault with the following error message:
moose/test> ./run_tests -j 6
<frozen importlib._bootstrap>:241: RuntimeWarning: compile time version 3.9 of module 'hit' does not match runtime version 3.11
Segmentation fault
Do you have an idea of how I can diagnose and fix this error? Many thanks again!
It looks like you compiled HIT with Python 3.9, but when running the tests you were using Python 3.11. Such an environment mismatch could cause a segmentation fault. One thing you could try is running a test directly with the generated test executable, to see whether the problem is isolated to Python.
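For example, something along these lines (illustrative; moose_test-opt is the opt-mode test executable the build produces, and the input file is one of the standard tests):

cd moose/test
# Run one input directly, bypassing the python-based test harness
./moose_test-opt -i tests/kernels/simple_diffusion/simple_diffusion.i
# Also check which python the harness itself would pick up
python3 --version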
Hi @lindsayad, thank you for the hint! I was initially perplexed by this message until I realized that the default Python had been updated by our system admin between the time I built most of the dependencies and the time I ran MOOSE itself. I can now build the code and dependencies without error. Cheers!
One more question: when running the tests, many fail due to mpiexec not being directly available to users on our system -- instead we use Slurm's srun with the Cray MPICH wrappers. Is there a convenient way in MOOSE to globally set the MPI command to call our wrapper of choice?
It looks like you can set a MOOSE_MPI_COMMAND environment variable. Does srun take a -n argument?
Hi @lindsayad, yes, srun can be called with a number of arguments (which can also be set as Slurm environment variables to avoid having to include them explicitly in the invocation), but the basic usage can be tailored down to an mpirun/mpiexec-like format:
srun -n <number-of-MPI-tasks> <executable>
In that case I think the MOOSE_MPI_COMMAND environment variable should be the solution. Let us know if it works!
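Something like this, assuming the test harness swaps MOOSE_MPI_COMMAND in for its default mpiexec invocation (a sketch, untested on a Cray/Slurm setup):

export MOOSE_MPI_COMMAND="srun"
cd moose/test
# Parallel tests should now be launched as: srun -n <tasks> <executable> ...
./run_tests -j 6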
Hi MOOSE development team,
I would like to build MOOSE from scratch to target a large HPC cluster. I am running into an issue with linking to dependencies when I attempt to install libMesh. The dependencies are all present on my system, but they are located in modules rather than in "default" locations. I have a workaround for the libMesh issue, described below, but I am curious whether there is a more elegant way to achieve this (and I am not sure that my workaround doesn't introduce a later problem -- see below -- when building MOOSE itself). Here are my build steps:
After these steps the following modules are loaded (note the versions of items 4 and 18 in the list below):
Next I set the environment variables:
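Illustratively (the exact listing is system-specific; on a Cray machine these are typically along these lines):

export CC=cc CXX=CC FC=ftn   # Cray compiler wrappers
export MOOSE_JOBS=6          # parallel jobs for the MOOSE build scripts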
PETSc is already installed on our system via a spack module, so I load this via:
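Illustratively (the actual module name and version on our system may differ):

module load petsc/<version>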
Now, when I attempt to build libmesh, I receive the following errors:
I noticed in ~/moose/libmesh/build/Makefile that the configure script found the following locations for the xpmem and pmi libraries:

My workaround is to hack the Makefile to target where the newer versions of these libraries reside on my system:
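Schematically, the hack amounts to something like this (the version paths are placeholders, not the actual ones from my system):

# Retarget the -L library search paths in the generated Makefile
sed -i \
  -e 's|xpmem/<old-version>|xpmem/<new-version>|g' \
  -e 's|pmi/<old-version>|pmi/<new-version>|g' \
  ~/moose/libmesh/build/Makefile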
After the libmesh Makefile hack, I can successfully build libmesh:
Is there a way to direct the configure script to find the newer xpmem and pmi dependency libraries, so as to avoid the need for the hack above?
Once libmesh is built, I continue by building WASP (successfully), followed by MOOSE itself:
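Roughly, the sequence is (standard MOOSE workflow; the exact invocations on my system may differ slightly):

cd moose/scripts
./update_and_rebuild_wasp.sh   # WASP builds successfully
cd ../test
make -j 6                      # the MOOSE build fails here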
My attempt to build MOOSE itself fails when trying to compile EigenProblem.C; this gives the following error message:
If you have any advice on how to solve this, then I would be very grateful! Many thanks!