ecmwf-ifs / fiat

The Fortran IFS and Arpege Toolkit
Apache License 2.0

Incorrect usage of non-blocking MPI collectives #17

Open lukasm91 opened 9 months ago

lukasm91 commented 9 months ago

We found an issue in the usage of collectives, for example:

https://github.com/ecmwf-ifs/fiat/blob/main/src/fiat/mpl/internal/mpl_alltoallv_mod.F90#L252

The variables IRECVDISPL and ISENDDISPL are local variables and do not outlive the end of the subroutine, yet MPI requires them to remain valid until the collective completes; see https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/node126.htm:

Once initiated, all associated send buffers and buffers associated with input arguments (such as arrays of counts, displacements, or datatypes in the vector versions of the collectives) should not be modified, and all associated receive buffers should not be accessed, until the collective operation completes.

Other routines such as MPL_ALLGATHERV have the same issue.

In the latest HPC-X version we get a segfault because of this. A quick workaround is to disable non-blocking communication for collectives; a proper fix is probably to make these displacement arrays POINTER rather than ALLOCATABLE and to require the optional inputs whenever non-blocking communication is used.
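For readers less familiar with the lifetime rule being violated, here is a simplified sketch of the hazardous pattern (hypothetical names, not the actual fiat code; the real routine is in mpl_alltoallv_mod.F90):

```fortran
! Sketch of the bug: the displacement arrays are local ALLOCATABLEs,
! so they are deallocated when the subroutine returns, while MPI may
! still read them until the matching MPI_WAIT completes.
SUBROUTINE MY_IALLTOALLV(PSENDBUF, KSENDCOUNTS, PRECVBUF, KRECVCOUNTS, &
   &                     KCOMM, KREQUEST)
  USE MPI
  IMPLICIT NONE
  REAL,    INTENT(IN)    :: PSENDBUF(:)
  INTEGER, INTENT(IN)    :: KSENDCOUNTS(:), KRECVCOUNTS(:), KCOMM
  REAL,    INTENT(INOUT) :: PRECVBUF(:)
  INTEGER, INTENT(OUT)   :: KREQUEST
  INTEGER, ALLOCATABLE   :: ISENDDISPL(:), IRECVDISPL(:)
  INTEGER :: IERR, J

  ! Compute displacements as running sums of the counts
  ALLOCATE(ISENDDISPL(SIZE(KSENDCOUNTS)), IRECVDISPL(SIZE(KRECVCOUNTS)))
  ISENDDISPL(1) = 0
  IRECVDISPL(1) = 0
  DO J = 2, SIZE(KSENDCOUNTS)
    ISENDDISPL(J) = ISENDDISPL(J-1) + KSENDCOUNTS(J-1)
    IRECVDISPL(J) = IRECVDISPL(J-1) + KRECVCOUNTS(J-1)
  END DO

  CALL MPI_IALLTOALLV(PSENDBUF, KSENDCOUNTS, ISENDDISPL, MPI_REAL, &
     &                PRECVBUF, KRECVCOUNTS, IRECVDISPL, MPI_REAL, &
     &                KCOMM, KREQUEST, IERR)
  ! BUG: ISENDDISPL and IRECVDISPL are deallocated on return here, but
  ! the MPI standard requires them to remain valid until the operation
  ! identified by KREQUEST completes (e.g. via MPI_WAIT).
END SUBROUTINE MY_IALLTOALLV
```

Whether this actually crashes depends on whether the MPI library copies the count/displacement arrays eagerly or reads them lazily during progression, which is why the bug only surfaces with certain implementations.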

wdeconinck commented 9 months ago

Thanks @lukasm91 for this report! So I understand now that MPI_IALLTOALLV in most MPI implementations internally calls the blocking MPI_ALLTOALLV, which is why this problem went under the radar for the past decades. It makes sense that the (optional) arguments KRECVDISPL and KSENDDISPL must stay alive as long as MPI_WAIT has not been called. I would therefore suggest aborting if the optional arguments are not present (a breaking change for IFS/Arpege). This will require changes in ifs-source to make sure this is indeed the case.

An alternative, backward-compatible fix (perhaps temporary, but these things stick) is to call the blocking MPI_ALLTOALLV when the arguments are missing, but that would just hide the problem and we'd all mistakenly think we're using the non-blocking version.
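The two options above could be sketched as follows (schematic only, with hypothetical flag and argument names; it assumes an abort helper along the lines of MPL's MPL_ABORT):

```fortran
! Option 1 (breaking): refuse the non-blocking path unless the caller
! supplies displacement arrays that can outlive this routine.
IF (LLNONBLOCKING .AND. .NOT. (PRESENT(KSENDDISPL) .AND. PRESENT(KRECVDISPL))) THEN
  CALL MPL_ABORT('MPL_ALLTOALLV: non-blocking use requires KSENDDISPL/KRECVDISPL')
END IF

! Option 2 (backward compatible, but hides the problem): silently fall
! back to the blocking collective when the optional arguments are absent.
IF (PRESENT(KSENDDISPL) .AND. PRESENT(KRECVDISPL)) THEN
  CALL MPI_IALLTOALLV(..., KSENDDISPL, ..., KRECVDISPL, ..., IREQUEST, IERR)
ELSE
  CALL MPI_ALLTOALLV(..., ISENDDISPL, ..., IRECVDISPL, ..., IERR)  ! blocking
END IF
```

With option 2, callers that omit the displacements would quietly lose the communication/computation overlap they believe they are getting, which is the concern raised above.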

Two cents @ioanhadade @marsdeno ?

ioanhadade commented 9 months ago

Good catch. We need to work on a fix, as we have started using non-blocking collectives in some places and plan to use them more. I believe one (an igather) was introduced recently in opdis.F90. In most cases we pass local arrays from the calling routine as arguments to the MPL collective, which means that if the wait for completion is not done in the same scope (e.g., the calling routine) but somewhere later, the arguments have already gone out of scope.
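The caller-side variant of the hazard described here looks roughly like this (hypothetical names; it assumes a wait routine along the lines of MPL_WAIT):

```fortran
! The counts/displacements are locals of the routine that initiates the
! collective, but completion is awaited later, in a different scope.
SUBROUTINE START_EXCHANGE(KREQUEST)
  IMPLICIT NONE
  INTEGER, INTENT(OUT) :: KREQUEST
  INTEGER :: ICOUNTS(NPROC), IDISPL(NPROC)   ! local to this routine
  ! ... fill ICOUNTS/IDISPL, then post the non-blocking collective ...
  CALL MPL_ALLTOALLV(..., KRECVCOUNTS=ICOUNTS, KRECVDISPL=IDISPL, &
     &               KREQUEST=KREQUEST)
END SUBROUTINE START_EXCHANGE   ! ICOUNTS/IDISPL cease to exist here

! ... elsewhere, much later:
CALL MPL_WAIT(KREQUEST)   ! too late: the arrays were freed at return
```

So even once MPL itself keeps its internal displacement arrays alive, callers must keep their own count/displacement arguments alive until the wait.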

Thanks @lukasm91.

lukasm91 commented 3 weeks ago

FYI: Alexey from BSC also ran into this when running IFS. For the moment we just disable UCC (OMPI_MCA_coll_ucc_enable=0).
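For reference, the workaround mentioned above is an Open MPI MCA setting that can be exported before launching (environment/config fragment, not a fix):

```shell
# Disable the UCC collectives component in Open MPI, so the affected
# collectives fall back to paths that do not read the (dangling)
# displacement arrays after the call returns.
export OMPI_MCA_coll_ucc_enable=0
```
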

a-v-medvedev commented 3 weeks ago

Thanks @lukasm91 for highlighting this. I ran into this issue while working on the DestinE/ClimateDT code to make it run properly on the MareNostrum5 machine (we use fiat-1.2.0 there for now).

From my previous experience, I'd say that this incorrect-semantics issue for MPI_IAlltoallv() is quite severe: many MPI implementations are nowadays moving to more advanced algorithms for non-blocking collectives, so this will lead to random, occasional, or persistent crashes (depending on how lucky you are) on many new machines and runtime environments.

I'd suggest implementing an urgent hotfix ASAP. I think I have no permission to create branches in this project, so instead of a pull request I'm attaching a variant of a hotfix patch here: hotfix-alltoall-issue-github#17.patch.gz. This is just one idea for a fix (even though I tested it in my environment recently); one could implement another kind of fix based on this or a similar idea. Any comments and corrections are welcome. @wdeconinck

ioanhadade commented 3 weeks ago

@a-v-medvedev can you fork fiat, add your patch on top of develop/master, and then raise a PR to develop?

wdeconinck commented 3 weeks ago

The hotfix in #29 (thanks @a-v-medvedev) is sufficient to urgently patch the crashing behaviour. A proper fix will be worked on internally with priority. This issue should not be closed until that proper fix is in place.