Closed maddyscientist closed 3 years ago
MPI-3 support has only recently appeared in an MPICH release candidate, and it doesn't look like OpenMPI has any official support yet, so I don't think we can rely on it.
A placeholder for myself here. MPI-3 supports non-blocking collectives. I'm not sure whether any Krylov solvers have been adapted to use this concept yet, but if so it could help with scaling.
Just sat in a talk by DK Panda on one-sided communication in MVAPICH2. This will appear in an upcoming release and reduces latency for peer-to-peer memory accesses from 10us down to 7us. A definite proof point for why we should consider this (perhaps as an alternative MPI backend, while maintaining the current two-sided backend).
I just noticed that MPICH, MVAPICH and Cray MPI (the last two of which are based on the first) all now claim support for MPI 3. Regardless of whether OpenMPI supports MPI 3 yet, I think it is now safe to consider implementing MPI 3 features.
This has been implemented by the Tsukuba / Tokyo group, though I have yet to get the source code.
Guess this has been superseded by NVSHMEM?
Indeed. Closing.
We should achieve better strong scaling if we use one-sided MPI communication instead of the current two-sided approach. This has been shown to improve strong scaling on a variety of machines.
The only thing to discuss is whether to use the one-sided interface already available in MPI 2, or to use MPI 3, which updates and expands it. I guess this is a question of compatibility: is there any reason not to target MPI 3?