See: https://www.usenix.org/legacy/publications/compsystems/1990/sum_ruane.pdf
Any operating system kernel has some form of process synchronization, allowing a process to wait for a particular condition. The traditional choice for UNIX systems, the event-wait mechanism, leads to race conditions on multiprocessors. This problem was initially solved in Amdahl's UTS multiprocessing kernel by replacing the event-wait mechanism with Dijkstra semaphores. The kernel, however, became noticeably more complicated and less reliable when based on semaphores. This has led us to develop a race-free multiprocessor event-wait mechanism with some novel properties. A few common synchronization techniques have emerged, the most complex of which was verified correct with the supertrace protocol validation system spin. A new scheduling approach with per-CPU run queues reduces the number of unnecessary context switches due to awakening all waiting processes. The overall approach is claimed to be simple, efficient, and reliable.
After modeling the Backoff mechanism in a formal language and formally verifying it by model checking, I believe I've narrowed down the deadlock.
Here, https://github.com/mratsim/weave/blob/b4c5a37568236a665fc5bdff2cd426a33324e2a4/weave/channels/event_notifiers.nim#L113-L120, the parent worker loads its child worker's state into a register and exits if it's Busy or ShouldWakeup.
But while this happens, the child can send its intent to sleep, https://github.com/mratsim/weave/blob/b4c5a37568236a665fc5bdff2cd426a33324e2a4/weave/channels/event_notifiers.nim#L88-L95, and then follow up with actually sleeping, https://github.com/mratsim/weave/blob/b4c5a37568236a665fc5bdff2cd426a33324e2a4/weave/channels/event_notifiers.nim#L97-L106.
The parent thread then sends its shutdown signal, but it is never received.
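To make the race concrete, here is a minimal, single-threaded Nim sketch that replays that interleaving with a simplified state machine (Busy / IntendToSleep / Parked / ShouldWakeup). It is only an illustration of the scenario above, not the actual event_notifiers code:

type ConsumerState = enum
  Busy, IntendToSleep, Parked, ShouldWakeup

var
  consumerState = Busy
  signaled = false            # stands in for the futex / condition variable

proc notify() =
  # Parent side: decides on a (possibly stale) view of the child's state.
  let localState = consumerState
  if localState in {Busy, ShouldWakeup}:
    return                    # exits without signaling
  consumerState = ShouldWakeup
  signaled = true

proc intendToSleep() =
  consumerState = IntendToSleep   # prepare phase

proc wait() =
  # Child side: commit phase, models the CAS IntendToSleep -> Parked.
  if consumerState == IntendToSleep:
    consumerState = Parked
    if not signaled:
      echo "child parks and waits for `signaled`, which will never come"

# Replay of the problematic interleaving:
notify()          # parent reads Busy and returns without waking anyone
intendToSleep()   # child announces its intent to sleep
wait()            # child parks; the parent's later shutdown signal is never received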
Formal specification in TLA+
------------------- MODULE event_notifiers -----------------------
(*
Formal specification of the event_notifiers datastructure.
It allows a single consumer to be put to sleep and being woken up
by multiple producers so that the consumer is able to consume the incoming messages (steal requests).
It combines a prepare phase "intendToSleep" with commit phase "wait".
In between the consumer sends a message to its parent that it is going to sleep.
The commit phase is aborted when any producers signal an incoming message.
There should be no deadlock, i.e. an incoming message being signaled but the consumer stays sleeping
as in the runtime the main thread may be the only one awake and wouldn't be able to awaken its children
in that case.
*)
EXTENDS Integers, TLC, Sequences, FiniteSets
CONSTANTS NumThreads, ConsumerTID
ASSUME NumThreads > 1
ASSUME ConsumerTID > 0
ASSUME ConsumerTID < NumThreads
MaxID == NumThreads-1
ParentTID == ConsumerTID \div 2
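\* All workers except the consumer and its parent act as plain producers (potential thieves)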
producers == (0..MaxID) \ {ConsumerTID, ParentTID}
(* PlusCal options (-termination) *)
(* --algorithm event_notifier
variables
    consumerState = "Busy";
    signaled = FALSE;       \* Simulate a condition variable
    msgToParent = "None";   \* Simulate a message to parent, i.e. an opportunity for thread interleaving

macro intendToSleep() begin
    consumerState := "IntendToSleep"
end macro

\* Everything in a macro happens atomically
macro atomicCompareExchange(result, current, expected, newVal) begin
    if current = expected then
        current := newVal;
        result := TRUE;
    else
        result := FALSE;
    end if;
end macro;

\* The consumer waits until it is signaled to wake up
procedure wait()
    variables casSuccess = FALSE;
begin
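    \* Commit to parking only if no producer changed consumerState
    \* (e.g. to "ShouldWakeup") since intendToSleep.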
    WCAS1: atomicCompareExchange(casSuccess, consumerState, "IntendToSleep", "Parked");
    if casSuccess then
        W2: while consumerState = "Parked" do
            \* The while loop protects against spurious wakeups
            WCV3: await signaled;
            signaled := FALSE;
        end while;
    end if;
    W4: assert consumerState \in {"Parked", "ShouldWakeup"};
    W5: consumerState := "Busy";
    W8: return;
end procedure;

\* Notify the potentially waiting consumer that it should wake up
procedure notify()
    variables localConsumerState = "N/A"; \* local view of the consumer state
begin
    N1: localConsumerState := consumerState;
    N2: if localConsumerState \in {"Busy", "ShouldWakeup"} then
        N3: return;
    end if;
    N4: consumerState := "ShouldWakeup";
    N5: while TRUE do
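        \* Keep signaling until the consumer has actually woken up,
        \* i.e. until consumerState is back to "Busy".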
        NSIG6: signaled := TRUE;
        N7: if consumerState /= "Busy" then
            N8: skip;
        else
            N9: return;
        end if;
    end while;
end procedure;

procedure mayRequestWork()
begin MaySteal:
    either
        \* Sometimes you have enough work
        NoSteal: skip;
    or
        \* Sometimes you don't and you steal
        Steal: call notify();
    end either;
    ReqRET: return;
end procedure;

procedure mayShareWork()
    \* A parent can also share work with
    \* a child that sent it "Waiting"
begin MayShare:
    either
        \* Sometimes the parent doesn't have work
        NoWork: skip;
    or
        \* Sometimes it has some
        Share0: if msgToParent = "Waiting" then
            Share1: call notify();         \* wake up the child
            Share2: msgToParent := "None"; \* dequeue the child steal request
        end if;
    end either;
    ShareRET: return;
end procedure;

\* Not fair because they might never steal
process producer \in producers
begin Coworkers:
    call mayRequestWork();
end process;

\* A parent will always run at least the termination
fair process parent = ParentTID
    \* The order of work sharing and work stealing is arbitrary
begin ParentWork:
    either
        PMayRW0: call mayRequestWork();
        PMaySW0: call mayShareWork();
    or
        PMaySW1: call mayShareWork();
        PMayRW1: call mayRequestWork();
    end either;
    \* But it will for sure tell the consumer to terminate at one point
    Terminate: call notify();
end process;

process consumer = ConsumerTID
begin ConsumerWork:
    either
        \* If we have work, we work on it
        FoundWork: skip;
    or
        \* We signal our intent to sleep, tell our parent and then sleep
        Sleeping0: intendToSleep();
        Sleeping1: msgToParent := "Waiting";
        Sleeping2: call wait();
    end either;
end process;

end algorithm; *)
Error trace (keep an eye on the consumerState and localConsumerState)
And it was not it, or it was hiding another one. Unfortunately, the issue disappears when adding full debug output, but when adding just debugTermination and looking at 8 occurrences, the stacktrace is always the same:
Worker 1 parks -> Worker 0 enters the barrier -> Worker 0 receives the message -> Deadlock
Note that, rarely, this does not lead to a deadlock. This, combined with the formal verification in #54, suggests that the deadlock is at the system level, probably an ordering issue between sending a steal request and receiving the waiting message.
For example, between trySteal and declineAll (declineAll is what processes the children's waiting status): https://github.com/mratsim/weave/blob/a06469293a8c61e85d53389e073dcfce8689ed7a/weave/runtime.nim#L121-L129
Okay, so first of all: it's not a deadlock, it's a livelock.
To begin the investigation, here is the stacktrace sampling after a minute of livelock.
Note: these are assumptions to get started with and to test various hypotheses:
shareWork, which normally wakes up and calls the child worker, should be triggered there: https://github.com/mratsim/weave/blob/a06469293a8c61e85d53389e073dcfce8689ed7a/weave/victims.nim#L333-L355
We need to know more about that part. Since shareWork doesn't appear at all in the stacktraces, either it has been inlined, or the call is cheap because the worksharingRequest queue is empty.
This is the only place that can wake up the child worker in that loop. However, that still doesn't explain why the main thread doesn't work on tasks or detect termination. It tries to dispatchTasks to the thief: https://github.com/mratsim/weave/blob/a06469293a8c61e85d53389e073dcfce8689ed7a/weave/victims.nim#L224-L246
We know from the last branch that takeTasks failed to find a task. How come? runTask was run and no tasks are left.
dispatchTasks calls decline and declineOwn: https://github.com/mratsim/weave/blob/a06469293a8c61e85d53389e073dcfce8689ed7a/weave/victims.nim#L166-L179 https://github.com/mratsim/weave/blob/a06469293a8c61e85d53389e073dcfce8689ed7a/weave/victims.nim#L125-L164. From the if condition and the fact that termination is not detected, we know that either the request is not in the Stealing state, or the master thread didn't flag the worker as waiting yet. We are in the first case. This is proven by the call to recvProxy, which receives steal requests on behalf of the idling worker.
Now we need to know when recvProxy is called within this sync/barrier loop, and also where the main thread has the opportunity to tag runtimeIsQuiescent or change the request to the Stealing status. And it's in nextTask(), in while recv(req): https://github.com/mratsim/weave/blob/a06469293a8c61e85d53389e073dcfce8689ed7a/weave/scheduler.nim#L133-L156
In summary, here is the livelock as I understand it:
- The child's steal request is in the Working state. How? Why? That can only happen in forceFuture: https://github.com/mratsim/weave/blob/a06469293a8c61e85d53389e073dcfce8689ed7a/weave/scheduler.nim#L301-L317
- The master thread enters the sync(Weave) barrier and loops in nextTask, and will be livelocked happily ever after: https://github.com/mratsim/weave/blob/a06469293a8c61e85d53389e073dcfce8689ed7a/weave/runtime.nim#L89-L103 and nextTask: https://github.com/mratsim/weave/blob/a06469293a8c61e85d53389e073dcfce8689ed7a/weave/scheduler.nim#L133-L156
- The first stealRequest received should be the "Waiting" request of the child, which is consistent with the interleaving Worker 1 sends WAITING request -> Worker 0 enters barrier -> Worker 0 receives WAITING request. The master thread flags its child as waiting.
- The request is still in the Working state, so in declineOwn it's recirculated in the same state: https://github.com/mratsim/weave/blob/a06469293a8c61e85d53389e073dcfce8689ed7a/weave/victims.nim#L125-L164
Now questions:
- Why is the child's request not in the Stealing state? See: https://github.com/mratsim/weave/blob/a06469293a8c61e85d53389e073dcfce8689ed7a/weave/victims.nim#L125-L145
Disclaimer: this analysis is not backed by anything but my VTune stacktraces, the code, and the unreliable screenshot from the previous post (unreliable because we can't rely on stdout prints from interleaved thread execution). I may have missed something. In short, I have no logs or timestamped call sequences.
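Even so, to make the suspected cycle concrete, here is a heavily simplified, hypothetical Nim model of it. Names and logic are paraphrased from my reading above; this is not the actual runtime.nim / victims.nim code:

type
  ReqState = enum Working, Stealing, Waiting
  StealRequest = object
    state: ReqState

proc declineOwn(req: StealRequest, childIsWaiting: bool): bool =
  # Quiescence can only be declared for a request in the Stealing state
  # from a child already flagged as waiting; otherwise the request is
  # recirculated unchanged.
  req.state == Stealing and childIsWaiting

proc syncBarrier() =
  var req = StealRequest(state: Working)  # the child's request, stuck in Working
  var childIsWaiting = false
  for _ in 1 .. 5:                        # stands in for "while not quiescent"
    # nextTask: the master receives steal requests on behalf of the idle child
    childIsWaiting = true                 # the WAITING message *is* seen...
    if declineOwn(req, childIsWaiting):
      echo "runtimeIsQuiescent -> leave the barrier"
      return
    # ...but req.state never becomes Stealing, so we never get here
  echo "still spinning after 5 rounds: livelock"

syncBarrier()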
Fixed in #55. Now there is the glibc stuff from #56 left ...
Seems like with 2 threads on nqueens 11 we have a reliable deadlock :/