mpi-forum / mpi-issues

Tickets for the MPI Forum
http://www.mpi-forum.org/

RMA: deprecate MPI_PROD and MPI_M**LOC from RMA #559

Open jeffhammond opened 2 years ago

jeffhammond commented 2 years ago

Problem

MPI_PROD is useless in all contexts, and no one uses it [1]. It is especially painful in RMA implementations with hardware support, since no network supports this operation for all datatypes.

MPI_M**LOC is useful in collectives, but it is unclear whether there are any real use cases for it in RMA. Implementers hate supporting it there.

[1] I looked. NWChem was the only open-source code other than MPI test suites using MPI_PROD, and it was using it to implement MPI_LAND. I removed MPI_PROD years ago (https://github.com/GlobalArrays/ga/issues/52).
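
As a hedged illustration of the replacement mentioned above (a hypothetical sketch, not NWChem or Global Arrays code): a product over flags restricted to {0, 1} is 1 exactly when every flag is 1, so MPI_PROD on such flags can be swapped for MPI_LAND.

```c
/* Hypothetical sketch: a product of 0/1 flags behaves like a logical AND,
 * so the MPI_PROD reduction here can be replaced by MPI_LAND. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int ok = 1;                       /* local success flag: 1 = success, 0 = failure */
    int all_ok_prod, all_ok_land;

    /* Old pattern: the product of all flags is 1 only if every flag is 1. */
    MPI_Allreduce(&ok, &all_ok_prod, 1, MPI_INT, MPI_PROD, MPI_COMM_WORLD);

    /* Equivalent and clearer: logical AND of the same flags. */
    MPI_Allreduce(&ok, &all_ok_land, 1, MPI_INT, MPI_LAND, MPI_COMM_WORLD);

    if (rank == 0)
        printf("prod=%d land=%d\n", all_ok_prod, all_ok_land);

    MPI_Finalize();
    return 0;
}
```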

Proposal

Deprecate support for MPI_PROD, MPI_MAXLOC, and MPI_MINLOC in the RMA chapter.
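
For reference, a minimal sketch (my own illustration, not text from this ticket) of the kind of RMA call the proposal would deprecate: MPI_Accumulate using MPI_MAXLOC on MPI_DOUBLE_INT pairs, so that the target window ends up holding the global maximum and the rank that contributed it.

```c
/* Sketch of an RMA call affected by the proposal: MPI_MAXLOC as the op of
 * MPI_Accumulate on (double, int) pairs. Layout assumed to match MPI_DOUBLE_INT. */
#include <mpi.h>
#include <stdio.h>

struct double_int { double val; int loc; };

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    struct double_int local = { .val = (double)rank, .loc = rank };
    struct double_int *base;
    MPI_Win win;

    /* Every rank exposes one (double, int) pair; rank 0's copy is the target. */
    MPI_Win_allocate(sizeof(struct double_int), sizeof(struct double_int),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &base, &win);
    base->val = -1.0;   /* initial value lower than any contribution */
    base->loc = -1;

    MPI_Win_fence(0, win);
    /* The op/datatype combination under discussion. */
    MPI_Accumulate(&local, 1, MPI_DOUBLE_INT,
                   0 /* target rank */, 0 /* displacement */, 1, MPI_DOUBLE_INT,
                   MPI_MAXLOC, win);
    MPI_Win_fence(0, win);

    if (rank == 0)
        printf("max = %f at rank %d\n", base->val, base->loc);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```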

Changes to the Text

TODO

Impact on Implementations

Implementations become simpler to implement correctly and can be faster.

Impact on Users

It is unclear whether anyone uses these features in RMA, so the impact may be only theoretical.

References and Pull Requests

jeffhammond commented 2 years ago

@hjelmn I recall you hate M**LOC in RMA the most :-)

@jedbrown Do you use MPI_M**LOC in RMA in PETSc StarForest? Is it important?

jeffhammond commented 2 years ago

We should deprecate in MPI 4.1 so we can delete in MPI 5.0. This is a relatively minor change.

devreal commented 2 years ago

I remember @mahermanns had a use case for M**LOC in Scalasca?

jedbrown commented 2 years ago

We use MPI_MAXLOC in PETSc unstructured mesh management (via the SF interface, which has an RMA implementation). We don't recommend using the RMA implementation because most workloads are slower that way and implementations have historically had bugs.

devreal commented 2 years ago

Notes from the WG meeting:

wesbland commented 1 year ago

This wasn't read at the December 2022 meeting. The last opportunity for MPI 4.1 is to have it ready at the March 2023 meeting (and it needs to "pass" the reading).

@devreal / @jdinan Are you (or is someone else) planning to push this forward or should we move it out of the plan for MPI 4.1?

jdinan commented 1 year ago

I'll defer to @devreal on this one. I'm fully booked with the HACC WG.