
Synchronization Algebra #59

Open eupp opened 3 years ago

eupp commented 3 years ago

A synchronization algebra is an algebraic structure defined on a set of labels. It determines which pairs of events from parallel processes should synchronize under parallel composition.

In [2] the S.A. is defined as a tuple <L, \star, \bot, *>, where L is a set of labels containing two distinguished elements: \star, a dummy "idle" label, and \bot, a label encoding undefined synchronization; and * is a binary composition operation on L.

The operation is subject to several axioms; among them, * is commutative and associative (see [2] for the full list).

The paper [2] gives the following example of an S.A.:

| *     | \star | a     | !a    | t     | \bot  |
|-------|-------|-------|-------|-------|-------|
| \star | \star | a     | !a    | t     | \bot  |
| a     | a     | \bot  | t     | \bot  | \bot  |
| !a    | !a    | t     | \bot  | \bot  | \bot  |
| t     | t     | \bot  | \bot  | \bot  | \bot  |
| \bot  | \bot  | \bot  | \bot  | \bot  | \bot  |

That is, the algebra consists of two complementary labels a and !a, which can synchronize into label t.
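
To make this concrete, here is a small OCaml sketch of this example algebra (the constructor names are mine: `Star`, `Bot`, and `Tau` stand for \star, \bot, and t; an illustration, not the mechanization itself):

```ocaml
(* The example S.A. from [2]: labels a, !a, their synchronization t,
   the idle label \star, and the undefined label \bot. *)
type label = Star | A | NotA | Tau | Bot

(* The synchronization table above: \star acts as a unit,
   a and !a synchronize into t, everything else is undefined. *)
let sync (l1 : label) (l2 : label) : label =
  match l1, l2 with
  | Star, l | l, Star -> l
  | A, NotA | NotA, A -> Tau
  | _, _ -> Bot
```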

Consider two singleton event structures: E1 ::= { e1:a } and E2 ::= { e2:!a } (with no conflicts and trivial causality, i.e. \emptyset |- e1 and \emptyset |- e2). Then their parallel composition (according to [1]) is E1 || E2 ::= { (e1, \star):a, (\star, e2):!a, (e1, e2):t }, with trivial causality, a conflict between the events (e1, \star) & (e1, e2), and a conflict between the events (\star, e2) & (e1, e2). In other words, the event (e1, e2) depends neither on (e1, \star) nor on (\star, e2), and it is in conflict with both of them (they cannot both occur in the same configuration).
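
The event part of this construction is easy to sketch on top of the `sync` function above; `None` plays the role of the \star component in event pairs, and conflict/causality are omitted (a toy illustration under those assumptions):

```ocaml
(* Pair each event (or the idle placeholder) of E1 with one of E2 and
   keep exactly the pairs whose labels synchronize. *)
let par_events (es1 : (string * label) list) (es2 : (string * label) list) =
  let with_idle es = (None, Star) :: List.map (fun (e, l) -> (Some e, l)) es in
  List.concat_map
    (fun (e1, l1) ->
       List.filter_map
         (fun (e2, l2) ->
            match e1, e2, sync l1 l2 with
            | None, None, _ -> None  (* the pair of two idles is not an event *)
            | _, _, Bot -> None      (* labels that cannot synchronize *)
            | _, _, l -> Some ((e1, e2), l))
         (with_idle es2))
    (with_idle es1)
```

Then `par_events [("e1", A)] [("e2", NotA)]` yields exactly the three labelled events of E1 || E2 above.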

While the label \bot is used to indicate that two labels cannot synchronize (i.e. it encodes the partiality of the synchronization operation), the label \star is used as a dummy "idle" synchronization. That is, since we have e1:a * e2:!a = (e1,e2):t, it is convenient to assume that e1:a * \star = (e1, \star):a, i.e. e1 "synchronizes" with the distinguished fictional event \star.

Next, the paper [2] defines the "divides" relation (let's denote it as >>).

l1 >> l2 ::= \exists l3. l1 * l3 = l2.

This relation is proven to be transitive.
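
Over a finite label set, >> can be checked by brute force; a sketch reusing the `label` and `sync` definitions above (transitivity then follows from the associativity of *):

```ocaml
(* l1 >> l2 iff some l3 satisfies l1 * l3 = l2. *)
let all_labels = [Star; A; NotA; Tau; Bot]

let divides (l1 : label) (l2 : label) : bool =
  List.exists (fun l3 -> sync l1 l3 = l2) all_labels
```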

Now, given this classical construction [1,2], we will have to adjust it for our needs.

The most significant change that I want to propose is to get rid of commutativity, i.e. to make the synchronization operation asymmetric. This way we will be able to capture the asymmetry between writes and reads to shared memory or, more generally, between producers and consumers.

Let's consider the labels that we have in the relaxed shared memory concurrency semantics.

Label ::= W(x,v) | R(x,v)

That is, we have writes and reads, annotated by the memory location and value written/read.

It would be convenient to assume that there is a distinguished "undefined" value \bot. We will use it to give denotations to individual read instructions.

For example, consider two programs P1 ::= write(x, 1) and P2 ::= read(x). Their denotations would be equal to the following event structures: E1 ::= { e1:W(x,1) } and E2 ::= { e2:R(x,\bot) }. Next, we want the denotation of their parallel composition P1 || P2 to be E1 || E2 ::= { (e1,\star):W(x,1), (\star,e2):R(x,\bot), (e1,e2):R(x,1) }. In order to achieve that, we need to define the synchronization algebra as follows:

1) W(x,a) * R(x,\bot) = R(x,a);
2) l1 * l2 = \bot otherwise.

Then we'll have that W(x,a) >> R(x,a) as expected.
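
A minimal OCaml sketch of this asymmetric operation (the names `value`, `mlabel`, `msync`, and `Undef` are illustrative; partiality is encoded with `option` here, cf. the unresolved question below):

```ocaml
type value = Undef | Val of int                       (* Undef is the \bot value *)
type mlabel = W of string * value | R of string * value

(* W(x,a) * R(x,\bot) = R(x,a); everything else does not synchronize.
   Note the deliberate asymmetry: reads do not compose with writes
   from the left. *)
let msync (l1 : mlabel) (l2 : mlabel) : mlabel option =
  match l1, l2 with
  | W (x, v), R (y, Undef) when x = y -> Some (R (x, v))
  | _, _ -> None
```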

However, an important departure from the classical theory is that in E1 || E2 we also want to add a causality dependency between the write and the synchronized read. That is, we want (e1,e2):R(x,1) to causally depend on (e1,\star):W(x,1), while at the same time the event (e1,e2):R(x,1) should be in conflict with (\star,e2):R(x,\bot).

That is why I propose to get rid of commutativity in the synchronization algebra. In other words, W(x,a) * R(x,\bot) = R(x,a), but R(x,\bot) * W(x,a) = \bot. My assumption is that this asymmetry will help us to capture the causality between the write event (producer) and the read event (consumer).

Unresolved Questions

1) How to better encode the partial synchronization operation?
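
One candidate (a sketch, not a concrete proposal for the mechanization): keep the operation total into an `option` type, with `None` meaning "no synchronization"; the \bot-based presentation of [2] is then recovered by adjoining an absorbing element:

```ocaml
(* Turn an option-valued partial operation into a total one with an
   explicit undefined element, as in the classical presentation. *)
let totalize (bot : 'a) (op : 'a -> 'a -> 'a option) : 'a -> 'a -> 'a =
  fun x y -> match op x y with Some z -> z | None -> bot
```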

[1] Winskel G. Event structures // Advanced Course on Petri Nets. Springer, Berlin, Heidelberg, 1986. pp. 325-392.

[2] Winskel G. Event structure semantics for CCS and related languages // International Colloquium on Automata, Languages, and Programming. Springer, Berlin, Heidelberg, 1982. pp. 561-576.

clayrat commented 3 years ago

It seems to me SAs are pretty much conical PCMs, e.g. see the presentation in https://core.ac.uk/download/pdf/82434444.pdf

eupp commented 3 years ago

> It seems to me SAs are pretty much conical PCMs, e.g. see the presentation in https://core.ac.uk/download/pdf/82434444.pdf

Well, except the commutativity part : ) I've given some intuition on why we try to get rid of commutativity in the original post. I can try to give further explanations here after the weekend, or we can discuss it in person.

clayrat commented 3 years ago

It sounds like you might want to keep some commutativity, arriving at some sort of a hybrid between a PCM and a trace monoid (a partial partially commutative monoid?)

eupp commented 3 years ago

Currently, I'm not sure that commutativity applies to our setting. So let me try to explain why : )

In relaxed memory theory, a common approach is to define the semantics as a set of so-called execution graphs. Those are a generalization of traces in the sense that a trace can be viewed as a total order, while an execution graph is a partial order. Related concepts/keywords --- pomsets, Mazurkiewicz traces. I believe these three (execution graphs, pomsets, Mazurkiewicz traces) are basically the same thing, with minor differences.

Each execution graph consists of a set of events E, a labeling function E -> L, and two binary relations --- program order po and reads-from rf. Labels are usually either reads or writes, i.e. R(x,v) and W(x,v), where x is a memory location and v is the value read/written. Program order determines the intra-thread ordering of events, while reads-from determines the "dataflow" from write events to read events. More details on this can be found, e.g., here, or basically in any paper from the group of Viktor Vafeiadis, e.g. RC11 :)
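
In OCaml-style pseudocode, reusing the `mlabel` type sketched in the opening post (field names are mine):

```ocaml
type event = int

type exec_graph = {
  events : event list;
  lab    : event -> mlabel;        (* labeling function E -> L *)
  po     : (event * event) list;   (* program order *)
  rf     : (event * event) list;   (* reads-from: from writes to reads *)
}
```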

The transitive closure of po and rf can be seen as a "causality" relation ca := (po+rf)^+. There are a couple of subtle points here. First, this causality order is different from the stronger "happens-before" order. Second, this definition doesn't work for some weaker models (e.g. ARMv8, Promising, etc.). But for a wide class of models (e.g. SC, TSO, RA, RC11), it matches our intuition (to some extent).
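
For finite graphs, ca can be computed as a naive fixpoint over the `exec_graph` record above (a sketch; efficiency is beside the point here):

```ocaml
(* ca := (po + rf)^+, computed by iterating relational composition
   until no new pairs appear. *)
let causality (g : exec_graph) : (event * event) list =
  let compose rel =
    List.concat_map (fun (a, b) ->
        List.filter_map (fun (c, d) -> if b = c then Some (a, d) else None) rel)
      rel
  in
  let rec fix rel =
    let rel' = List.sort_uniq compare (rel @ compose rel) in
    if List.length rel' = List.length rel then rel else fix rel'
  in
  fix (List.sort_uniq compare (g.po @ g.rf))
```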

Now to the business! The problem is that the reads-from relation is essentially not symmetric. It captures the flow of data from write events (producers) to read events (consumers). Now, our idea is to derive an "axiomatization" of the reads-from relation from the synchronization algebra defined on labels. This is why we don't want a commutative monoid.

Back to the example in the beginning. Suppose we have two processes P1 ::= write(x, 1) and P2 ::= read(x). An axiomatic semantics (e.g. RC11) will assign two execution graphs to the program P1 || P2:

< { E:L := e1:W(x,1), e2:R(x,\bot) }, po := \empty, rf := \empty >
< { E:L := e1:W(x,1), e2:R(x,1) }, po := \empty, rf := { (e1, e2) } >

How can we reconstruct this following the compositional semantics? We start with two denotations:

E1 := < { E:L := e1:W(x,1) }, po := \empty, rf := \empty >
E2 := < { E:L := e2:R(x,\bot) }, po := \empty, rf := \empty >

Next, we take their composition E1 || E2. It is defined w.r.t. the synchronization algebra. In our S.A. we have W(x,1) * R(x,\bot) = R(x,1). Now, it is important that these two do not commute, i.e. R(x,\bot) * W(x,1) = \bot != R(x,1).
Essentially, this non-commutativity defines the "direction" of the reads-from relation in the parallel product. As a result, we will obtain

E := < { E:L := (e1,\star):W(x,1), (\star,e2):R(x,\bot), (e1, e2):R(x,1) }  , 
          po := \empty, 
          rf := { ((e1, \star), (e1, e2)) } >
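
To illustrate how the asymmetry orients rf, here is a sketch (reusing `msync` from the opening post) that emits an rf edge from the producer's \star-extension to the synchronized pair whenever the labels compose:

```ocaml
(* For every pair of events whose labels synchronize (producer on the
   left), generate an rf edge ((e1, \star), (e1, e2)); None stands
   for the \star component. *)
let derive_rf (es1 : (string * mlabel) list) (es2 : (string * mlabel) list) =
  List.concat_map (fun (e1, l1) ->
      List.filter_map (fun (e2, l2) ->
          match msync l1 l2 with
          | Some _ -> Some ((Some e1, None), (Some e1, Some e2))
          | None -> None)
        es2)
    es1
```

On es1 = [("e1", W ("x", Val 1))] and es2 = [("e2", R ("x", Undef))] this yields exactly the single rf edge ((e1, \star), (e1, e2)) from the product above.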