julesghub opened this issue 10 months ago
The appropriate solution is likely to run `from mpi4py import MPI` before everything else in `underworld3/__init__.py`. Need to test that.
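For illustration, the top of `underworld3/__init__.py` might then begin like this (a sketch only; whether and where the package imports petsc4py is an assumption here):

```python
# underworld3/__init__.py (sketch)
# Importing mpi4py first initializes MPI with MPI_THREAD_MULTIPLE requested,
# before PETSc gets a chance to initialize MPI itself.
from mpi4py import MPI  # noqa: F401  -- imported only for its side effect

import petsc4py  # assumed: the package pulls in petsc4py somewhere after this
```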
Here is the explanation. MPI code cannot be used with threads unless you promise to coordinate somehow. `MPI_THREAD_MULTIPLE` is one such promise, and it seems that Scotch requires it if you use threads. This is an option to `MPI_Init()`. Before calling `PetscInitialize()`, you can set `PETSC_MPI_THREAD_REQUIRED = MPI_THREAD_MULTIPLE` to make this happen, or you can call `MPI_Init()` yourself, which is the solution above.
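For reference, a minimal sketch of the "call `MPI_Init()` yourself" route via mpi4py (mpi4py requests `MPI_THREAD_MULTIPLE` by default; the explicit `rc` settings below just make that visible):

```python
import mpi4py
mpi4py.rc.threads = True             # allow threaded MPI
mpi4py.rc.thread_level = "multiple"  # request MPI_THREAD_MULTIPLE (mpi4py's default)

from mpi4py import MPI               # importing MPI calls MPI_Init_thread() at that level

import petsc4py
petsc4py.init()                      # PetscInitialize() now finds MPI already initialized
```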
Thanks for the info @knepley! I'll stick to the above solution for now and see how it goes.
@julesghub - I may have reintroduced this issue in development while trying to get back the ability to read command-line arguments. We might need to check!
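One ordering that seems to keep both behaviours (a sketch, not verified against the current development branch) is to import mpi4py first and then pass `sys.argv` to `petsc4py.init()` so PETSc command-line options are still read:

```python
import sys

from mpi4py import MPI   # initialize MPI first (MPI_THREAD_MULTIPLE requested by default)

import petsc4py
petsc4py.init(sys.argv)  # pass the command line so PETSc options (e.g. -ksp_type) are parsed
```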
roger that, I'll investigate today.
I still had trouble with this recently - can we revisit when the PETSc 3.21.0 workflow is in place, @julesghub?
By default, in our code, this is triggered by the conda-forge PETSc build. That, and the import ordering that allows command-line arguments to be read, seem to be a merry-go-round of cyclic regressions.
Using conda, parallel models hit this error along with the usual PETSC ERROR message:

```
ERROR: SCOTCH_dgraphInit: Scotch compiled with SCOTCH_PTHREAD and program not launched with MPI_THREAD_MULTIPLE
```
The workaround is to put `from mpi4py import MPI` before the `import petsc4py` or `import underworld` in your input file.
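Concretely, the top of the input file would look something like this (a sketch; `import underworld3` is used here as a stand-in for whichever underworld import the script actually needs):

```python
from mpi4py import MPI   # must come first: initializes MPI with MPI_THREAD_MULTIPLE requested

import petsc4py          # PETSc will now find MPI already initialized when it starts up
import underworld3       # assumed package name; adjust to the script's actual import
```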