slfritchie opened 9 years ago

Howdy, I'm wondering if y'all have any plans or thinking about an option to use something other than distributed Erlang message passing for communication between plumtree actors. Yes/no/maybe/Scott-should-make-a-PR?
:+1:
We've talked about it. Are you talking about something like my Teleport library?
http://vagabond.github.io/programming/2015/03/31/chasing-distributed-erlang/
This was always a long-term plan of mine, but I never had a chance to implement it.
I think something like Andrew's Teleport library could be very interesting, but it would probably need some love first. Did you have something else in mind, @slfritchie?
Hi, sorry I didn't return to this more quickly.
I didn't have a particular thing in mind. My personal pet project, Machi, doesn't use disterl at all, as a "what if we need LUDICROUS SPEED?" defensive tactic. The Machi chain manager does its own internal gossip-like thing, since it has a mechanism that (in theory) allows all nodes to disseminate up/down status info eventually. In practice, it's a probabilistic thing that (for the moment) is less likely to converge as the # of participants grows ... and that is suboptimal. (If the network partition is complete/total, then convergence is very rapid. But it would be very nice to be able to handle any kind of partition in a sane manner.)
An option that I have been exploring in very small slices of time is using Plumtree to disseminate individual views of up/down status. If a network partition is not complete/total/"completely separate islands", then each Machi participant would eventually see those up/down updates as propagated through Plumtree. Then the chain manager would be able to make decisions that would cause quick convergence (unlike today's worst case).
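For concreteness, here is a rough sketch of what a plumtree broadcast handler for those up/down observations might look like, assuming plumtree's handler behaviour mirrors riak_core's (broadcast_data/merge/is_stale/graft/exchange callbacks). The module name, message shape, and the ETS table are all invented for illustration; nothing here is real Machi or plumtree code.

```erlang
%% Hypothetical handler that gossips per-node up/down observations via
%% plumtree. The updown_tab ETS table is assumed to be created
%% elsewhere (e.g. by a supervisor).
-module(machi_updown_handler).
-behaviour(plumtree_broadcast_handler).

-export([broadcast_data/1, merge/2, is_stale/1, graft/1, exchange/1]).

%% An observation is {Observer, Peer, Status, Counter}; the
%% per-observer counter lets receivers discard stale observations.
broadcast_data({Observer, Peer, _Status, Counter} = Obs) ->
    {{Observer, Peer, Counter}, Obs}.

merge({Observer, Peer, Counter} = MsgId, {_, _, Status, _}) ->
    case is_stale(MsgId) of
        true ->
            false;
        false ->
            ets:insert(updown_tab, {{Observer, Peer}, Counter, Status}),
            true
    end.

is_stale({Observer, Peer, Counter}) ->
    case ets:lookup(updown_tab, {Observer, Peer}) of
        [{_, Latest, _}] when Latest >= Counter -> true;
        _ -> false
    end.

graft({Observer, Peer, Counter}) ->
    case ets:lookup(updown_tab, {Observer, Peer}) of
        [{_, Counter, Status}] -> {ok, {Observer, Peer, Status, Counter}};
        _ -> stale
    end.

exchange(_Peer) ->
    %% Anti-entropy exchange is elided in this sketch.
    {error, not_implemented}.
```

A node would then broadcast an observation with something like plumtree_broadcast:broadcast({node(), Peer, down, Ctr}, machi_updown_handler), if plumtree_broadcast:broadcast/2 works like its riak_core_broadcast counterpart.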
I need to do an exercise first: is it less work to make the gossipy part of today's implementation less stupid/more robust, or is it less work to foist the gossip task onto Plumtree and modify Plumtree to communicate via not-disterl?
@tsantero I am wondering if you're still interested in that change. If so, what would be the best way to handle a custom transport, i.e. how to set it up?
Right now I can see the following:

```erlang
ok = plumtree:start_gossip(GossipName, Transport, Options),
plumtree:join(GossipName, Node),
```

with one broadcast per server.
Possibly the application could bridge two gossip networks. Thoughts?
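To make the Transport argument concrete, one option is a small behaviour that a transport module implements. None of these names exist in plumtree today; this is purely a strawman for discussion:

```erlang
%% Strawman behaviour for the Transport argument to the proposed
%% plumtree:start_gossip/3. Nothing here is existing plumtree API.
-module(plumtree_transport).

%% Establish a connection to a peer (e.g. a TCP connection, or just
%% the node name for a disterl-backed transport).
-callback connect(Peer :: term(), Opts :: proplists:proplist()) ->
    {ok, Conn :: term()} | {error, term()}.

%% Fire-and-forget send of an encoded message to a named process on
%% the peer; the receiving side decodes and re-injects it locally.
-callback send(Conn :: term(), Dest :: atom(), Msg :: binary()) ->
    ok | {error, term()}.

-callback close(Conn :: term()) -> ok.
```

A disterl-backed default could make connect/2 a no-op and reduce send/3 to a gen_server:cast of the decoded term, and a process implementing two such transports at once is one way the application could bridge two gossip networks.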
I can't speak for the peer service included in this repo, but the actual plumtree bits (whether you are looking at this repo or riak_core) purposefully moved all message sending to a function that could later be updated to support different transports based on configuration. There shouldn't be (m)any instances of `!` or `gen_server:cast` (unless they are local casts) just lying around in the code, so it should be as simple as updating `plumtree_broadcast:send/2` and making the receiving side convert the messages back to `gen_server:cast` (I think the previously mentioned library by @Vagabond does this?).
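Assuming that single choke point works as described, a transport-aware version might look roughly like this. The `transport` app env key and the Mod:send/3 contract are assumptions carried over from the strawman above, not real plumtree configuration:

```erlang
%% Sketch of a transport-aware send in the spirit of
%% plumtree_broadcast:send/2; the real function may differ.
-module(plumtree_send_sketch).
-export([send/2]).

send({Name, Node} = Dest, Msg) ->
    case application:get_env(plumtree, transport, disterl) of
        disterl ->
            %% Today's behaviour: a plain distributed-Erlang cast.
            gen_server:cast(Dest, Msg);
        Mod when is_atom(Mod) ->
            %% Custom transport: ship an encoded term (here passing the
            %% node as the connection handle for simplicity). The
            %% receiving side decodes and converts it back to a local
            %% gen_server:cast(Name, DecodedMsg).
            Mod:send(Node, Name, term_to_binary(Msg))
    end.
```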
I'm not sure where it stands now, but the early version of that library didn't support causal ordering between processes, because of how disconnections can occur. The transparency of converting the calls on either side is trivial, but ensuring the ordering guarantees that Erlang provides for message delivery is much harder.
That is my thinking from about a year ago, though, so it could be way off.
There shouldn't be anything in the plumtree bits that relies on ordering from Erlang. The protocol (as it was extended in Riak) was designed to support dropped/duplicated/re-ordered messages, and the implementation should adhere to that constraint. I would consider it a bug in the implementation if something broke while switching away from Erlang's message passing.
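That claim is cheap to smoke-test: run broadcasts over a deliberately hostile transport that drops, duplicates, and delays messages, and check that the tree still converges. A minimal sketch, with the module name and probabilities invented:

```erlang
%% Hypothetical lossy transport for exercising the claim that
%% dropped/duplicated/re-ordered messages are tolerated. Deliveries
%% still go over disterl; we just inject failure modes before the cast.
-module(plumtree_lossy_transport).
-export([send/2]).

send(Dest, Msg) ->
    case rand:uniform() of
        R when R < 0.10 ->
            %% Drop ~10% of messages outright.
            ok;
        R when R < 0.20 ->
            %% Duplicate ~10% of messages.
            gen_server:cast(Dest, Msg),
            gen_server:cast(Dest, Msg),
            ok;
        R when R < 0.30 ->
            %% Delay ~10% of messages to force re-ordering.
            {ok, _} = timer:apply_after(100, gen_server, cast, [Dest, Msg]),
            ok;
        _ ->
            gen_server:cast(Dest, Msg)
    end.
```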