pymeasure / leco-protocol

Design notes for a generic communication protocol to control experiments and measurement hardware
https://leco-laboratory-experiment-control-protocol.readthedocs.io
MIT License

Multiple Coordinators #22

Open bklebel opened 1 year ago

bklebel commented 1 year ago

Since it comes up again and again, let's collect here the notes on the possible ways to use multiple Coordinators (either in one system or across systems).

BenediktBurger commented 1 year ago

So in the end, this issue revolves around the transport layer of the control protocol? At least the question of how we get a message from some sending Component to some receiving Component is tightly bound to the question of one (or several) Coordinators. Especially the Node concept adds the requirement of several Coordinators.

May I rename this issue to cover the transport layer of the control protocol, or do you think that deserves a separate issue?

BenediktBurger commented 1 year ago

Here are my ideas regarding the transport layer of the control protocol:

Basic ideas

Open questions:

How do Coordinators connect to each other?

How do we achieve the routing of messages?

An example of the Coordinator Connector (This is my current test implementation):

In this system the Coordinators do not know that they are tricked by the Connector, and names only have to be unique per Coordinator. You need, however, an additional Connector, and names get changed along the routing path.

bilderbuchi commented 1 year ago
  • Each component has a unique name, which can be expressed as an (ASCII) string.

let's use "ID" instead of "name" -- a name to me implies something expressive like "Temperature logger", which is not necessarily unique, nor really compact (for the protocol). I think it's OK if a Component has a "name" (e.g. "Keithley2000 PSU from pymeasure") and an "ID" (some random string, or something composed in another way)

or one similar connection to its node's Coordinator (queue or something)

Just to clarify this: "its node" can also operate in DTM (in which case that would also use a DEALER socket); a queue etc. is only used in LTM.

How do Coordinators connect to each other?

IMO, this one; the others seem unnecessarily complicated:

How do we achieve the routing of messages?

  • Each Coordinator knows (through heartbeats or sent messages) the addresses of its connected Components and their names. Therefore a single Coordinator + N Components works easily.

Agreed with this. Then, if we can make the Coordinator-Coordinator connection a many-to-many connection (like PUB/SUB), we could just fully interconnect all Coordinators (I don't expect more than ~3-5 for now), and if a Coordinator does not know the recipient of a message, it just publishes that message. All Coordinators check the message, and if they know the recipient from their list, they send it on (and maybe send an ACK of sorts back).

I also like the namespaced names. We could do this: every Coordinator needs to know its Components, and it can compile and distribute that list (+ later updates), e.g. via a "routing" message. Then other Coordinators have that information, so a Component can easily request the list of all Components in the network (like an address book)! I would say that the Coordinators participate in the routing, and basically strip or prepend their own name to the component ID when they receive a message from or send a message into the wider network, respectively.
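A rough sketch of such an address book and the forwarding decision (all names and data structures here are invented for illustration):

```python
# Address book of a hypothetical Coordinator "CO1" with local Components A and B.
# Local Components map to their zmq connection identities; remote ones map to the
# Coordinator that announced them in its "routing" message.
local_components = {"A": b"zmq-identity-A", "B": b"zmq-identity-B"}
remote_components = {"CO2.C": "CO2", "CO2.D": "CO2"}


def next_hop(recipient: str) -> str:
    """Decide where a message addressed to `recipient` goes next."""
    if recipient in local_components:
        return f"deliver locally to {recipient}"
    if recipient in remote_components:
        return f"forward to Coordinator {remote_components[recipient]}"
    return "unknown recipient: publish the message to all Coordinators"
```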

bilderbuchi commented 1 year ago

I would like to avoid an additional component if we can. Also, I would caution against reinventing a routing/discovery protocol (probably without many useful details and pitfall avoidance); let's reuse one if possible.

bilderbuchi commented 1 year ago

Quick sketch. Let's assume this structure:

flowchart TB
    subgraph node1
    CO1-->A
    CO1-->B
    end
    subgraph node2
    CO2-->C
    CO2-->D
    end

Address book exchange:

sequenceDiagram
    CO1->>CO2: I know CO1.A, CO1.B
    CO2->>CO1: I know CO2.C, CO2.D 

Local routing:

sequenceDiagram
    A->>CO1: sender: A, recipient: B
    CO1->>B: sender: A, recipient: B

Inter-coordinator routing:

sequenceDiagram
    A->>CO1: sender: A, recipient: CO2.C
    Note over CO1: "I know that guy!"
    CO1->>CO2: sender: CO1.A, recipient: CO2.C
    CO2->>C: sender: CO1.A, recipient: C
    Note over C: "C now knows the full address of A"

BenediktBurger commented 1 year ago

in DTM (in which case that would also use a DEALER socket); a queue etc. is only used in LTM.

I understood a node as using LTM. If a node does not use LTM, the node does not need an internal Coordinator, as all Components can connect to an external Coordinator.

The Coordinators need one DEALER socket for each Coordinator-Coordinator connection, as the DEALER sends messages to any connected peer (it does not know which one), while the ROUTER cannot initiate a conversation (it needs to know the peers' addresses).
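For illustration, a minimal pyzmq sketch of that asymmetry (addresses, ports, and the message frame layout are made up): every Coordinator binds one ROUTER socket and connects one DEALER socket per peer Coordinator it wants to talk to.

```python
import zmq

context = zmq.Context.instance()

# The ROUTER socket accepts incoming connections, but can only *reply*
# to peers whose connection identity it has already seen.
router = context.socket(zmq.ROUTER)
router.bind("tcp://*:12300")

# To *initiate* a conversation with another Coordinator, connect a
# dedicated DEALER socket to that Coordinator's ROUTER socket.
dealer_to_co2 = context.socket(zmq.DEALER)
dealer_to_co2.setsockopt_string(zmq.IDENTITY, "CO1")  # tell CO2 who we are
dealer_to_co2.connect("tcp://co2.example.com:12300")
dealer_to_co2.send_multipart([b"CO1.A", b"CO2.C", b"payload"])  # invented frames
```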

Inter-coordinator routing:

I think you missed naming the sender "CO1.A" at some point along the path (probably between the Coordinators). Similarly, the recipient should be only "C" between CO2 and C.

I'd say the "home Coordinator" adds its own name space at outgoing messages and strips it from incoming messages. So the second message (in your example) would be "sender 'CO1.A', recipient 'CO2.C'" and the third one "sender 'CO1.A', recipient 'C'".

Regarding routing:

We could have a pub-sub proxy (very reliable, almost no code for us), where the Coordinators exchange all Component connects and disconnects (for fast discovery) and regularly publish the list of their connected Components plus their own address (host and port), such that a Coordinator may create a connection to another one.

That way all Coordinators know all active Components (from the lists + updates) and do not have to ask the other Coordinators whether they know "C", for example.

Edit: I thought more about the problem. A recently started Coordinator may announce its presence via that channel, and the already started Coordinators connect to the new one. Coordinator heartbeats could also happen via that channel.

One (minor) question is whether all Coordinators use a DEALER socket for all other Coordinators (so all outgoing messages go through a DEALER) or only one of each pair of Coordinators uses a DEALER socket (I'd say both ways are similar in complexity, and it could be decided later).

bilderbuchi commented 1 year ago

I understood a node as using LTM. If a node does not use LTM, the node does not need an internal Coordinator, as all Components can connect to an external Coordinator.

Yes, but not necessarily. Inside a Node you have the option of LTM or DTM, outside only DTM. A Node-local Coordinator is not necessary if a Node uses DTM, but it might still be desirable (speed, latency, ...), as the other Coordinator might be on another device or otherwise far away.

The Coordinators need one DEALER socket for each Coordinator-Coordinator connection, as the DEALER sends messages to any connected peer (it does not know which one), while the ROUTER cannot initiate a conversation (it needs to know the peers' addresses).

Ah, so 4 Coordinators would each need 3 DEALER ports?

I think you missed

Indeed, thanks!

I'd say the "home Coordinator" adds its own name space at outgoing messages and strips it from incoming messages.

Agreed, that's what I intended, too.

Your routing remarks sound sensible. I can't really help/assist with zmq intricacies, unfortunately.

BenediktBurger commented 1 year ago

Ah, so 4 Coordinators would each need 3 DEALER ports?

Either each needs 3 DEALER sockets (a symmetric connection scheme), or they need on average 1.5 DEALER sockets each, if for every pair of Coordinators only one of the two uses a DEALER socket and the other replies via its ROUTER socket. (With n fully interconnected Coordinators there are n(n-1)/2 connections, so each Coordinator needs n-1 DEALER sockets in the symmetric scheme, or (n-1)/2 on average in the asymmetric one.)

Just a side note on that "Coordinator coordination system" via PUB-SUB: we can use the same proxy and protocol defined in #3. In fact, we would just have three identical proxy servers (on different ports) for three uses: Coordinator coordination, data exchange, and log messages. The proxy server itself (in Python you just call zmq.proxy(socket1, socket2) and are done, see my example in issue 3) is written by the zmq people and therefore very stable and reliable.
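For reference, such a proxy really is that small; a sketch with arbitrary port numbers:

```python
import zmq

context = zmq.Context.instance()

# Publishers (e.g. Coordinators announcing connects/disconnects) connect here.
xsub = context.socket(zmq.XSUB)
xsub.bind("tcp://*:12100")

# Subscribers (all Coordinators listening for announcements) connect here.
xpub = context.socket(zmq.XPUB)
xpub.bind("tcp://*:12101")

zmq.proxy(xsub, xpub)  # blocks forever, forwarding messages in both directions
```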

Your routing remarks sound sensible. I can't really help/assist with zmq intricacies, unfortunately.

In the summer I read the full zmq guide before I implemented my system. Now this information and experience is very helpful.

bilderbuchi commented 1 year ago

Let's keep the focus on the original question -- how to deal with multiple Coordinators. The "protocol transport layer" is (probably, at this point) going to be zmq and how we set it up; the coordinator coordination (omg :-P) is just one aspect of this.

bilderbuchi commented 1 year ago

In the summer I read the full zmq guide before I implemented my system. Now this information and experience is very helpful.

Yeah, I already noticed you two are quite experienced :D I'm perfectly happy with letting you and @bklebel hash out the details of the ports/connections/proxy design, and will mainly interject when I can't follow or my common-sense sensor trips. I'll try to focus a bit on the Actor/Driver/Processor interaction; I got a solid gut feeling for that from my previous project.

BenediktBurger commented 1 year ago

I summarize:

Whether a port connects or binds is marked bold.

bklebel commented 1 year ago

You are just so fast, I don't really keep up anymore... I fully agree with @bmoneke's last summary, i.e. that we should do it as described. It is much better if Coordinators communicate among themselves in a distributed way, with only the information about where which Coordinator is reachable kept in a central place. If the central place were to fail, the rest can still work fine: even if the data exchange and some Observer fail, all Actors can still be reached through the Coordinator system. And as we don't currently consider a system which extends across firewalls and port forwards in routers, it is fine (and important) that the communication among the Coordinators is N-to-N and not multi-hop, I think.

Did I get it right that the Coordinators use the (missing) heartbeat from the Actors to detect "disconnects"? What I think is missing in the summary is that if a new Coordinator publishes its presence, the others should not just ask for the list of Components connected to this new Coordinator, but should also send over the list of Components which they themselves are connected to; otherwise the new Coordinator does not know about any other Components in the system.

EDIT: found the LTM and DTM abbreviations

BenediktBurger commented 1 year ago

Did I get it right that the Coordinators use the (missing) heartbeat from the Actors to detect "disconnects"?

That, or because a Component explicitly "signs out". As that is not part of the connection among Coordinators, I did not mention it.

What I think is missing in the summary is

Thanks for that missing part. I edited my message.

bklebel commented 1 year ago

That, or because a Component explicitly "signs out".

Ah, okay, sure, we can always have a "sign out" message in a shutdown method. That sounds quite sensible; I did not think about it.

bilderbuchi commented 1 year ago

The summary sounds great! One thing that could be added re: the "address book":

Coordinators could send address book updates, too. That could be useful for a Director to know which Actors are available (e.g. to populate a GUI).

BenediktBurger commented 1 year ago

Yes, we should add the possibility to get the whole address book, but that is not an issue of the communication between Coordinators. EDIT: I opened a new issue for Coordinators and placed that idea there: #28

Regarding stripping / appending the namespace, I started a new discussion in #27, but that does not change the basic principles.

BenediktBurger commented 1 year ago

Just an idea: we could regularly, but rarely (every half an hour or so), request a current Component list in order to update the local list, just in case some information got lost.

bilderbuchi commented 1 year ago

That could be part of a regular "resync" exchange -- that will maybe not remain the only thing to be synched (clocks, e.g.?).

BenediktBurger commented 1 year ago

@bklebel had the idea to do the "Coordinator's announcement" via the control protocol instead of the data protocol (PUB-SUB).

Whatever the way is, we need one central server whose address is known (be it a normal Coordinator or an XPUB-XSUB proxy), such that Coordinators may connect to the known address and get the information about other Coordinators.

Advantages:

Disadvantages:

BenediktBurger commented 1 year ago

Regarding using the control protocol for address book updates:

Another advantage is that the Coordinators are self-sufficient (they do not need another communication channel).

Implementation: now that every Coordinator has a DEALER connection to each other one, they can send a message via each DEALER channel.

The same works for newly started Coordinators:

  1. You start Co1 with the address (host and port) of any other Coordinator Co2.
  2. The recently started Co1 tells the already started Coordinator Co2 that it is available under its address.
  3. Co2 tells all its buddy Coordinators, that this new Coordinator Co1 is available.
  4. All Coordinators connect with a new DEALER port to the new Coordinator Co1, which connects to them in return with its own DEALER port.
  5. In this connection phase, they exchange address book information.

This setup is great, as we do not need any "central coordinator". Any Coordinator serves as an entry point to the network. It is easy to set up: start one Coordinator. Later you start another one and give it the address of the first one. Then you can choose...

For reliability, we could give a list of addresses, such that it tries to contact one after the other until it finds a running Coordinator, so the network can rebuild itself if the Coordinators restart (as OS services).
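A sketch of that retry loop over the steps above (the sign-in message name and the 1 s timeout are invented):

```python
import zmq


def find_entry_point(context: zmq.Context, addresses: list[str]) -> zmq.Socket:
    """Try known Coordinator addresses in turn until one answers a sign-in."""
    for address in addresses:
        dealer = context.socket(zmq.DEALER)
        dealer.connect(address)
        dealer.send(b"COORDINATOR_SIGN_IN")  # hypothetical announcement message
        if dealer.poll(timeout=1000):  # wait up to 1 s for any reply
            return dealer  # found a running Coordinator; exchange address books next
        dealer.close(linger=0)
    raise ConnectionError("No running Coordinator found; we start a new network.")
```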

bilderbuchi commented 1 year ago

I like this approach of self-sufficient Coordinators without a "central" one. We could probably look to mesh network algorithms for how to efficiently deal with updates/resyncs after Coordinators have disappeared/reappeared.

For reliability, we could give a list of addresses, such that it tries to contact one after the other until it finds a running Coordinator, so the network can rebuild itself if the Coordinators restart (as OS services).

These "addresses" would in fact be of the ROUTER sockets of all previously known Coordinators, right? I guess the service/process can store that somewhere on disk as it should not change too often. Then on restart that info is already available. Maybe same with a Coordinator's list of connected Components?

bklebel commented 1 year ago

These "addresses" would in fact be of the ROUTER sockets of all previously known Coordinators, right?

Yes, exactly. I also like the way @bmoneke has put it: without a central Control Coordinator, but with fully distributed ones, where a user only needs to know the IP address of one Control Coordinator, and everything else is "discussed" amongst the network of Control Coordinators.

store that [Control Coordinator ROUTER socket addresses] somewhere on disk as it should not change too often

I think so too, although I am not sure how to put that into the protocol itself -- "MUST store on disk" (but we do not say where/how)? Regarding the list of connected Components, I am not so sure: although this would stay the same for quite a long time, the Components will always try to talk to the Control Coordinator anyway, so the Control Coordinator will notice them soon enough, especially with heartbeats. In the end, this particular question is more about reliability and the implementation, I think.

bilderbuchi commented 1 year ago

We could prescribe just that implementations must "persist" that info without saying how.

We could also make that optional - it's a convenience feature, and we could offer a path for manual discovery of other Coordinators.

Re: Component connections: consider that after a Coordinator restart all incoming connection senders will be unknown to the Coordinator, and will thus be refused (unknown sender).

BenediktBurger commented 1 year ago

We could also make that optional - it's a convenience feature, and we could offer a path for manual discovery of other Coordinators.

That's my stance: we do not need it for proper routing. It could be done externally (starting the Coordinator with a set of command line parameters etc.) or hard-coded in the startup script.

We could require that a Coordinator shall accept, as a startup parameter, a list of addresses to connect to.

These "addresses" would in fact be of the ROUTER sockets of all previously known Coordinators, right?

Right, as these addresses are IP addresses and port numbers. We could add a "store configuration to disk" command, which could also be useful for some Actors etc.

Maybe same with a Coordinator's list of connected Components?

The list of connected Components is useless, as the zmq connection identity will be different at reconnect. And you do not know whether an old client will come back.

Component connections: consider that after a Coordinator restart all incoming connection senders will be unknown to the Coordinator, and will thus be refused (unknown sender).

Yes. As we require signing in, Components have to sign in again after a Coordinator restart. All their communication partners have to connect and sign in as well, so you should wait at least one heartbeat interval after a Coordinator restart before you try to send messages (then you can expect that all Components have signed in again, as their refused heartbeat triggered a new sign-in).
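On the Component side this recovery could look like the following sketch (the error payload and the method names are invented placeholders):

```python
UNKNOWN_SENDER = b"ERROR: unknown sender"  # hypothetical error reply


def handle_reply(component, reply: bytes) -> None:
    """React to a restarted Coordinator that no longer knows this Component.

    `component` is any object offering sign_in() and resend_last_message();
    both method names are placeholders for this sketch.
    """
    if reply == UNKNOWN_SENDER:
        component.sign_in()  # the refused heartbeat/message triggers a new sign-in
        component.resend_last_message()  # then repeat the original request
```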

bilderbuchi commented 1 year ago

The list of connected Components is useless, as the zmq connection identity will be different at reconnect.

Ah, good to know!

So, the logic would/could be that the fresh Coordinator sends a "you're unknown to me" response, the Component will SIGN_IN, and then it can re-send its message.

BenediktBurger commented 1 year ago

Exactly. You should wait a bit before resending, however, such that the other side has time to sign in as well.
