Closed PhilMiller closed 3 months ago
This change was inspired by NWM work on integrating code from ESMF that also invariably called MPI_Finalize
whether we wanted it to or not.
@danishyo: how would this change impact PDAF?
Thx @PhilMiller; we understand your reasoning. I just wanted to make sure it works for another ESMF app.
I don't use it in PDAF, but there is an interface in schism_bmi.F90 (parallel_finalize)
@platipodium: can you please check whether it's necessary to include parallel_finalize in schism_bmi.F90? Not a big deal, but with Phil's change, it won't do anything when the communicator is inherited.
I'll go ahead and accept Phil's PR. Thx
Thanks for the incredibly quick review and merge on this @josephzhang8
@danishyo What's the schism_bmi.F90 you mentioned? Is there another BMI implementation for SCHISM besides the one that Jason developed?
schism_bmi.F90 was written by @platipodium for NUOPC/UFS.
Where does that implementation live? I'd like to compare it with what we have in development.
For the benefit of library users of SCHISM, it should only call MPI_Finalize if it 'owns' the MPI state. If the client code passed SCHISM a communicator, then that client code owns the process's MPI state. In that case, SCHISM should not finalize MPI; doing so may corrupt the client's state, causing a crash or other unexpected behavior near the end of the run.
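A minimal Fortran sketch of that ownership pattern (hypothetical module and routine names, not SCHISM's actual API): the library records whether it initialized MPI itself, and finalize becomes a no-op when the communicator was inherited.

```fortran
! Sketch only: names here (mpi_ownership, parallel_init, parallel_finalize)
! are illustrative, not SCHISM's actual interfaces.
module mpi_ownership
  use mpi
  implicit none
  integer :: comm = MPI_COMM_NULL
  logical :: owns_mpi = .false.
contains
  subroutine parallel_init(comm_in)
    integer, intent(in), optional :: comm_in
    integer :: ierr
    if (present(comm_in)) then
       comm = comm_in        ! communicator inherited: the client owns MPI
       owns_mpi = .false.
    else
       call MPI_Init(ierr)   ! we initialized MPI, so we must finalize it
       comm = MPI_COMM_WORLD
       owns_mpi = .true.
    end if
  end subroutine parallel_init

  subroutine parallel_finalize()
    integer :: ierr
    ! No-op by design when the communicator was passed in by the client.
    if (owns_mpi) call MPI_Finalize(ierr)
  end subroutine parallel_finalize
end module mpi_ownership
```

With this guard, a host application (e.g. an ESMF/NUOPC cap) that passes its own communicator keeps full control of MPI teardown, while standalone runs behave exactly as before.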