It turns out that calling MPI_Init from main rather than from the TPS::Tps object is not as simple as it sounds.
A key issue is that mfem::MPI_Session is a singleton class hard-coded to call MPI_Init.
That is, if we want parla to handle MPI initialization, then we must remove mfem::MPI_Session from everywhere in the tps code. (Note: mfem has already deprecated this class.)
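For the record, here is a minimal sketch (my assumptions, not actual tps code) of what main could look like once mfem::MPI_Session is gone: it only initializes and finalizes MPI when nobody else (e.g. parla's Python runtime) has already done so, and the idea of a Tps constructor that accepts an existing communicator is hypothetical. If I remember correctly, mfem's replacement for MPI_Session (the mfem::Mpi class) follows a similar "initialize once" pattern.

```cpp
#include <mpi.h>

int main(int argc, char *argv[]) {
  // Only initialize MPI if no one else (e.g. parla) has done so already.
  int already_initialized = 0;
  MPI_Initialized(&already_initialized);
  const bool we_own_mpi = !already_initialized;
  if (we_own_mpi) MPI_Init(&argc, &argv);

  {
    // Hypothetical: construct the solver from an existing communicator
    // instead of letting it build an mfem::MPI_Session internally.
    // TPS::Tps tps(MPI_COMM_WORLD, argc, argv);
    // tps.solve();
  }

  // Symmetric guard: only finalize MPI if we were the ones who started it.
  if (we_own_mpi) MPI_Finalize();
  return 0;
}
```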
Before we do that, I'd like to better understand the use of tps::MPI_Groups: are we actually solving different physics on different sets of processors, or are we simply using a single communicator for everything?
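To make the question concrete, the sketch below shows the kind of split I have in mind (purely illustrative, not necessarily what tps::MPI_Groups does): the world communicator is divided with MPI_Comm_split so that different ranks run different physics. If the groups are really used this way, the refactor has to preserve them; if every solver just uses MPI_COMM_WORLD, they may be redundant.

```cpp
#include <mpi.h>

int main(int argc, char *argv[]) {
  MPI_Init(&argc, &argv);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  // Illustrative split: the first half of the ranks solve one physics
  // (say, flow), the rest solve another (say, EM).
  const int color = (rank < size / 2) ? 0 : 1;
  MPI_Comm physics_comm;
  MPI_Comm_split(MPI_COMM_WORLD, color, rank, &physics_comm);

  // Each group would run its own solver on physics_comm; any cross-physics
  // exchange would go through MPI_COMM_WORLD (or an intercommunicator).

  MPI_Comm_free(&physics_comm);
  MPI_Finalize();
  return 0;
}
```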