Rufflewind opened this issue 7 years ago
I agree with (a) but disagree with the implementation impact. `MPI_User_function` is a type definition, not the identity of any function that matches its signature. The function is user code. MPI should not say anything about what the user can do with user code outside of the context of MPI calls.
What is absolutely essential - which you've captured well here - is that users cannot make any assumptions about which thread runs a callback used to implement a user-defined reduction. Furthermore, this function must be thread-safe when used with `MPI_THREAD_MULTIPLE`. I'm not sure it is a good idea to encourage users to synchronize in this function. Ideally, callbacks should not block, because they may run without preemptive scheduling.
> I agree with (a) but disagree with the implementation impact. `MPI_User_function` is a type definition, not the identity of any function that matches its signature. The function is user code. MPI should not say anything about what the user can do with user code outside of the context of MPI calls.
I didn't mean to restrict the user. What I intended to say is:
> The proposed rule (a) explicitly forbids the MPI implementation from calling the same `MPI_User_function` from multiple threads when `MPI_THREAD_MULTIPLE` is not active.

This is simply a consequence of rule (a), in the “only if” direction.
(I do wonder, do all existing MPI implementations satisfy this rule, even when progression threads are enabled?)
> I'm not sure it is a good idea to encourage users to synchronize in this function. Ideally, callbacks should not block, because they may run without preemptive scheduling.
Okay, that part was also unclear. It should have been more like:
> Hence, the user is only required to synchronize mutations of shared state if `MPI_THREAD_MULTIPLE` is enabled.
E.g. synchronization is not needed if the user simply reads from a global variable that is never written to during the collective operation.
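The read-only case above can be sketched as a minimal user function. This is a hypothetical example (the names `scale` and `scaled_sum` are illustrative, and the `MPI_Datatype` stub stands in for the real typedef from `<mpi.h>` so the sketch compiles on its own):

```c
/* Stub standing in for the real typedef from <mpi.h>, so this
 * sketch is self-contained. */
typedef int MPI_Datatype;

/* Global configuration written once before the reduction is posted and
 * never modified while the collective is in progress; reading it from
 * the callback therefore needs no synchronization at any thread level. */
static double scale = 2.0;

/* Matches the MPI_User_function signature; computes
 * inoutvec[i] += scale * invec[i], assuming MPI_DOUBLE elements. */
static void scaled_sum(void *invec, void *inoutvec, int *len,
                       MPI_Datatype *datatype)
{
    (void)datatype;
    const double *in = invec;
    double *io = inoutvec;
    for (int i = 0; i < *len; ++i)
        io[i] += scale * in[i];
}
```

With a real MPI build one would register this via `MPI_Op_create(&scaled_sum, 1, &op)` and pass `op` to `MPI_Reduce` or `MPI_Allreduce`.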
I believe there is another thing that should be clarified, namely when the MPI implementation is permitted to call the `MPI_User_function`. My intuitive guess would be:
> An `MPI_User_function` may be called by the MPI implementation if and only if a collective operation involving said function is in progress.
Does that seem right?
Then, the Impact on Users could be amended with:
> If `MPI_THREAD_MULTIPLE` is not enabled, then the user may invoke their own `MPI_User_function` only if (i) no MPI collective call involving the associated operation is in progress, or (ii) the user's invocation is thread-safe with respect to potential invocations by all in-progress collective calls.
Suggestion: add a clarification that the function can be called from any thread at any time - i.e., adopt the strongest restriction from the user's point of view, which we can weaken later.
Comment: should we address signal safety?
ToDo: two proposals for BCN meeting
Isn't signal safety really really hard? Is any MPI implementation signal-safe today?
What's the rationale for invoking the user function from a signal handler?
Presumably a signal handler can be used to drive progress instead of a thread 🤷‍♂️
The folks at the meeting already pushed back on the signals comment above. The MPI standard explicitly says that implementations don't need to be signal safe. But @schulzm likes signals for some reason. 😃
I’m going to propose moving this to MPI 5.0. There’s more discussion to be had here. If someone objects and thinks we’ll be ready to read this soon, leave a comment and we can discuss bringing it back into MPI 4.1.
Disagree. This is a 4.1 minor fix.
Ok, but at the moment there isn’t anyone who really owns this. It’s nominally an issue for collectives so I’ll give it to @tonyskjellum for now, but even though it’s a “minor” fix, someone still has to do it. 😄
> The folks at the meeting already pushed back on the signals comment above. The MPI standard explicitly says that implementations don't need to be signal safe. But @schulzm likes signals for some reason. 😃
Arguing from the requirements currently imposed on implementations is the wrong perspective. The question should be: What is the most constrained context from which any implementation might possibly call this function? Would an implementation possibly invoke this function from a signal handler? Then the function needs to be async-signal-safe (man 7 signal-safety). Would an implementation possibly invoke this function from an interrupt handler? ...
At the same time, I don't see any good reason why the function should be allowed to have any side effects other than working with the buffers passed as arguments (should this be spelled out?). This limitation would make the function signal-safe and safe to call from an interrupt handler. --> no static function variables, no access to global variables for counting the number of function invocations. --> Tools would need to take care not to instrument this function!
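To illustrate what the "no side effects beyond the buffers" rule would permit and forbid, here is a hypothetical pair of user functions (`max_op`, `counted_max_op`, and the `MPI_Datatype` stub are all illustrative; real code would include `<mpi.h>`):

```c
/* Stub standing in for the real typedef from <mpi.h>. */
typedef int MPI_Datatype;

/* OK under the proposed rule: touches only the argument buffers,
 * so it is thread-safe and async-signal-safe by construction. */
static void max_op(void *invec, void *inoutvec, int *len, MPI_Datatype *dt)
{
    (void)dt;
    const int *in = invec;
    int *io = inoutvec;
    for (int i = 0; i < *len; ++i)
        if (in[i] > io[i]) io[i] = in[i];
}

/* NOT ok under the proposed rule: the static counter is a side effect.
 * Two concurrent invocations race on it, and an invocation from a
 * signal handler that interrupts another invocation can lose updates,
 * so the function is neither thread-safe nor async-signal-safe. */
static long call_count = 0;
static void counted_max_op(void *invec, void *inoutvec, int *len,
                           MPI_Datatype *dt)
{
    ++call_count; /* unsynchronized read-modify-write on global state */
    max_op(invec, inoutvec, len, dt);
}
```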
Back to the question of thread-safety: Given the no-side-effects consideration, the only scenario I can think of where thread-safety would be relevant is that multiple threads call the function concurrently with the same buffer addresses. The process receives reduction messages from multiple other processes and performs the reduction operation for these messages on different threads. One thread could be the progress thread (or one of several progress threads) and the other thread could execute the `MPI_Wait` for the non-blocking reduction. This scenario can be relevant for all threading levels.
Is the MPI implementation responsible for avoiding concurrent calls of the user function, or should the user function be thread-safe? I think, even with `MPI_THREAD_MULTIPLE`, the implementation should take care that the user function is not called concurrently (per collective operation)! Requiring signal-safety and thread-safety at the same time would probably limit the user function to using atomic operations. I think dropping the signal-safety property would constrain the implementation more than requiring the implementation to not call the user function concurrently.
If concurrent reduction calls from the application access the same buffer, the application causes data races anyway. So the mutually exclusive call to the user function should only be relevant per collective operation.
Agree on no side effects, although this implies users cannot use, for example, OpenMP in such functions, since OpenMP has side effects. Do we want this restriction, and do we think users will understand that OpenMP is excluded by "no side effects"?
> Is the MPI implementation responsible for avoiding concurrent calls of the user function, or should the user function be thread-safe? I think, even with `MPI_THREAD_MULTIPLE`, the implementation should take care that the user function is not called concurrently (per collective operation)!
It should be valid for implementations to invoke the user function concurrently from multiple threads (user or progress) on different segments of the buffer used in an operation. Of course the implementation has to make sure that no two invocations can modify the same elements in the output buffer.
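A rough sketch of what such a segmented invocation might look like inside an implementation. This is purely illustrative (`apply_segmented` and `sum_fn` are hypothetical names, and the stub typedefs stand in for `<mpi.h>`); the segment calls run sequentially here for simplicity, but each could legally be dispatched to a different thread since the output ranges are disjoint:

```c
/* Stubs standing in for the real definitions from <mpi.h>. */
typedef int MPI_Datatype;
typedef void MPI_User_function(void *, void *, int *, MPI_Datatype *);

/* Apply a user op to nseg disjoint segments of the buffers. No two
 * calls touch the same output elements, so an implementation could
 * run each call on a different (user or progress) thread. */
static void apply_segmented(MPI_User_function *fn, double *in, double *io,
                            int len, int nseg, MPI_Datatype dt)
{
    int base = len / nseg, rem = len % nseg, off = 0;
    for (int s = 0; s < nseg; ++s) {
        int seglen = base + (s < rem ? 1 : 0);
        fn(in + off, io + off, &seglen, &dt); /* disjoint [off, off+seglen) */
        off += seglen;
    }
}

/* Example user op: elementwise sum of doubles. */
static void sum_fn(void *invec, void *inoutvec, int *len, MPI_Datatype *dt)
{
    (void)dt;
    const double *a = invec;
    double *b = inoutvec;
    for (int i = 0; i < *len; ++i)
        b[i] += a[i];
}
```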
Regarding signal-safety:
For most applications, reductions are probably signal- and thread-safe because they only read from the input buffer and modify the output buffer. However, should we exclude more complex operators that may require memory management? Or what if the reduction operator invokes a third-party library that gives no guarantees about its signal-safety? Or `printf`-debugging? I don't think we should prohibit that, especially since that is a corner case that doesn't seem to be relevant for most implementations and is easily avoided in implementations that do use signals (e.g., by deferring to a context outside of a signal handler).
As for side effects more broadly (which would include `printf`), we shouldn't prohibit them but explicitly state that user functions can be called from any thread at any time between start and completion of any operation using the user operator, including on threads that are not under the control of the application. If their OpenMP implementation can handle that, the user is free to use OpenMP. That is not MPI's business. And we shouldn't talk about thread-private variables either; none of our business.
The first post stated that multiple threads may only invoke the user function concurrently if `MPI_THREAD_MULTIPLE` was requested. That is a reversal of the meaning of the flag ("Multiple threads may call MPI, with no restrictions."). It strictly only concerns down-calls. Here we're talking about up-calls, so that flag does not apply.
Having said all of that, if we want to give users control over the execution context of the user operator, we should introduce a new function that accepts an info argument (`MPI_Op_create_with_info`) and info keys that restrain the invocation to a single thread or user-controlled threads only. I'm just not sure it's worth the effort...
This wasn't read at the December 2022 meeting. The last opportunity for MPI 4.1 is to have it ready at the March 2023 meeting (and it needs to "pass" the reading).
@schulzm / @tonyskjellum Are you (or is someone else) planning to push this forward or should we move it out of the plan for MPI 4.1?
Problem

The MPI Standard does not explicitly declare the threading guarantees of `MPI_User_function` when used in an `MPI_Op`. Specifically:

(A) Is `MPI_User_function` required to be thread-safe? Under what circumstances is the user required to use synchronization such as mutexes when mutating shared data?

(B) On which thread(s) can `MPI_User_function` be called? Under what circumstances can the user reliably use thread-local data, if at all?

Proposal
Based on responses from Jeff Hammond, William Gropp, and Marc Perache, the following resolution is proposed:
(a) `MPI_User_function` must be thread-safe if and only if `MPI_THREAD_MULTIPLE` is enabled. Hence, the user is only required to synchronize if `MPI_THREAD_MULTIPLE` is enabled.

(b) The thread on which `MPI_User_function` is invoked is implementation-defined. Hence, the use of thread-local data is unportable. (Informative note: Depending on whether the MPI implementation supports progression threads, the `MPI_User_function` may be executed on either the thread that called the collective operation or a progression thread.)

Changes to the Text

(Waiting for review of this proposal first.)
Impact on Implementations

The proposed rule (a) explicitly forbids calls of the same `MPI_User_function` from multiple threads when `MPI_THREAD_MULTIPLE` is not active.

Impact on Users

The clarification should dispel any uncertainty regarding the threading guarantees of `MPI_User_function`.

References

Mailing list discussion: https://lists.mpi-forum.org/pipermail/mpi-forum/2017-May/006576.html