A top-level MPI_Init at XACC::Initialize() is not ideal: we may want to use an MPI-enabled backend without HPC Virtualization, so a global MPI_Init at XACC::Initialize() could be problematic. Hence, move it back within the scope of the HPCVirt decorator.

Also fix an MPI_Finalize race condition when an MPI-enabled ExaTN is present in the installation. ExaTN uses an exatnInitializedMPI variable to decide whether it should perform the MPI_Finalize step; HPC Virt should have the same mechanism so that it does not finalize MPI prematurely and cause MPI errors during ExaTN::Finalize(), which may call MPI APIs.