underworldcode / underworld3

https://underworldcode.github.io/underworld3/

conda parallel run error #140

Open julesghub opened 10 months ago

julesghub commented 10 months ago

When using the conda build, parallel models hit this error alongside the usual PETSc ERROR message:

ERROR: SCOTCH_dgraphInit: Scotch compiled with SCOTCH_PTHREAD and program not launched with MPI_THREAD_MULTIPLE

The workaround is to put `from mpi4py import MPI` before `import petsc4py` or `import underworld` in your input file.
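
A minimal sketch of that ordering in an input script (assuming the `underworld3` package name; the rest of the script is unchanged):

```python
# Initialise MPI before PETSc: mpi4py requests MPI_THREAD_MULTIPLE by default,
# which is what PT-Scotch (compiled with SCOTCH_PTHREAD) expects.
from mpi4py import MPI

import underworld3 as uw  # safe now: PETSc is initialised after MPI

# ... remainder of the model script as usual
```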

julesghub commented 10 months ago

The appropriate solution is likely to put `from mpi4py import MPI` before everything else in underworld3/__init__.py. Need to test that.
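
A sketch of what that might look like at the top of underworld3/__init__.py (untested, as noted):

```python
# underworld3/__init__.py (sketch only)
# Make sure MPI is initialised with full thread support before petsc4py is
# imported anywhere in the package.
from mpi4py import MPI  # noqa: F401  -- imported for its initialisation side effect

import petsc4py
# ... existing package initialisation continues below
```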

knepley commented 10 months ago

Here is the explanation. MPI code cannot be used with threads unless you promise to somehow coordinate; MPI_THREAD_MULTIPLE is one such promise, and it seems that Scotch requires it if you use threads. This is an option to MPI_Init(). Before calling PetscInitialize(), you can set PETSC_MPI_THREAD_REQUIRED = MPI_THREAD_MULTIPLE to make this happen, or you can call MPI_Init() yourself, which is what the workaround above does.
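
For reference, the thread level that mpi4py requests at initialisation can also be set explicitly through mpi4py.rc before the first `from mpi4py import MPI`; a sketch (the "multiple" level shown is already the mpi4py default):

```python
import mpi4py

# Ask mpi4py to initialise MPI with full thread support.
mpi4py.rc.threads = True
mpi4py.rc.thread_level = "multiple"

from mpi4py import MPI       # MPI_Init_thread happens here
from petsc4py import PETSc   # PetscInitialize now finds MPI already initialised
```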

julesghub commented 10 months ago

Thanks for the info @knepley! I'll stick to the above solution for now and see how it goes.

lmoresi commented 9 months ago

@julesghub - I may have broken this again in development while trying to restore the ability to read command-line arguments. We might need to check!
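
If the regression is about initialisation order, one possible way to keep both behaviours (a sketch, not the project's actual fix) is to initialise MPI first and then hand the command line to petsc4py explicitly before anything else imports PETSc:

```python
import sys

# MPI first, so PT-Scotch sees MPI_THREAD_MULTIPLE ...
from mpi4py import MPI

# ... then pass the command line to PETSc before it is initialised elsewhere.
import petsc4py
petsc4py.init(sys.argv)

from petsc4py import PETSc
import underworld3 as uw
```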

julesghub commented 9 months ago

roger that, I'll investigate today.

lmoresi commented 7 months ago

I still had trouble with this recently - can we revisit when the petsc 3.21.0 workflow is in place, @julesghub ?

lmoresi commented 2 months ago

By default, in our code, this is triggered by the conda-forge PETSc build. That fix and the import ordering that allows command-line arguments to be read seem to be stuck on a merry-go-round of cyclic regressions.