Closed: haeferer closed this issue 7 years ago
If you use the eventProvider for communication between different server instances, you shouldn't filter those intercom events. We haven't tested it with different forks yet, but in theory they exist for one reason: so that you can get information, call functions, etc. on devices connected to a different instance when you make a request against another one.
For inter-server communication I would always suggest using a messaging server. For inter-process communication I think this is ok. But to make this work (and not spread every answer to all servers/processes that might need the event) you need something like an "incoming queue" that directs each answer to a server/process-specific message queue.
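A minimal sketch of that reply-queue pattern, assuming a Node/TypeScript setup with the amqplib client; the queue names, the shared "spark-server.actions" queue, and the payload shape are made up for illustration. Each server declares its own answer queue and tags outgoing requests with replyTo/correlationId, so only the requesting server receives the answer.

```typescript
import * as amqp from 'amqplib';
import { randomUUID } from 'crypto';

// Hypothetical per-server answer queue: only this process consumes it,
// so answers are not broadcast to every server.
async function requestWithReplyQueue(serverId: string, payload: object): Promise<object> {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();

  // Server/process-specific "incoming queue" for answers.
  const replyQueue = `spark-server.answers.${serverId}`;
  await channel.assertQueue(replyQueue, { exclusive: true });

  const correlationId = randomUUID();
  const answer = new Promise<object>((resolve) => {
    channel.consume(replyQueue, (msg) => {
      if (msg && msg.properties.correlationId === correlationId) {
        channel.ack(msg);
        resolve(JSON.parse(msg.content.toString()));
      }
    });
    // A real implementation would also handle timeouts and clean up the consumer.
  });

  // Publish the request to a shared action queue; whoever answers it uses
  // replyTo to route the answer back to exactly one server.
  channel.sendToQueue('spark-server.actions', Buffer.from(JSON.stringify(payload)), {
    correlationId,
    replyTo: replyQueue,
  });

  return answer;
}
```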
I'm currently building a solution for this (based on your eventProvider) to make a "headless" Spark-Server.
It seems like the same implementation you have done with "EventProvider/EventPublisher", but using a message queue.
Using this you should be able to tie different SparkServers together (behind a load balancer that balances Particles based on their IP).
I don't like the idea of having a single node for directing all the messages. You should be able to scale horizontally without caring about which one of your servers is the message director.
There are plenty of ways to determine whether or not the events should be re-dispatched without creating a single point of failure.
I understand your idea. But the "message director" (RabbitMQ in this case) also acts as a queue during updates or failures of nodes, re-queueing or storing messages until the receiver is available again. I don't think this can be done without a central message bus component.
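For what it's worth, a minimal sketch of the buffering that claim relies on, again assuming amqplib (the queue name is illustrative): a durable queue plus persistent messages lets RabbitMQ store actions while the consuming node is down and redeliver them once it reconnects.

```typescript
import * as amqp from 'amqplib';

// Publish an action so it survives broker restarts and waits for an
// offline spark-server node to come back and consume it.
async function publishDurableAction(action: object): Promise<void> {
  // In a real server the connection/channel would be created once and reused.
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();

  // durable: the queue definition survives a RabbitMQ restart.
  await channel.assertQueue('spark-server.actions', { durable: true });

  // persistent: the message is written to disk until it is consumed and acked.
  channel.sendToQueue('spark-server.actions', Buffer.from(JSON.stringify(action)), {
    persistent: true,
  });
}
```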
RabbitMQ IS a single point of failure, but that is exactly why it is built on Erlang and extremely stable. Look here for more info about the concepts behind Erlang: https://www.fastcompany.com/3026758/inside-erlang-the-rare-programming-language-behind-whatsapps-success
It doesn't matter to me if it's written in Erlang. You're talking about running the message queue through a single "headless" spark-server that handles all the messaging.
We have designed this server so it can scale horizontally without having a master node. You should be able to connect all the nodes to RabbitMQ and it should handle the message queue. I'm assuming that RabbitMQ will be scaled horizontally as well.
Ok, no, you got me wrong. Let me explain:
There are multiple spark-servers, all identical. Particles (for instance) are load-balanced based on their source IP. All spark-servers send events to the same queues. Based on the last event, a registry (Redis) is filled with the latest "managing spark-server" for each device.
So if an action is issued to the central action queue on RabbitMQ, a specialized worker picks up the action, resolves the managing spark-server (using Redis), and re-enqueues the action to the queue of the correct spark-server.
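A rough sketch of that routing worker, assuming ioredis and amqplib, with hypothetical names (a Redis key like `managing-server:<deviceID>` and per-server queues like `spark-server.actions.<serverId>`):

```typescript
import * as amqp from 'amqplib';
import Redis from 'ioredis';

// Worker: consumes the central action queue, looks up which spark-server
// currently manages the target device, and re-enqueues the action there.
async function startRoutingWorker(): Promise<void> {
  const redis = new Redis();
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();

  await channel.assertQueue('spark-server.actions', { durable: true });

  await channel.consume('spark-server.actions', async (msg) => {
    if (!msg) return;
    const action = JSON.parse(msg.content.toString());

    // Registry entry written by the spark-server that last saw the device online.
    const serverId = await redis.get(`managing-server:${action.deviceID}`);
    if (!serverId) {
      // Device currently unmanaged: requeue (or dead-letter) and retry later.
      channel.nack(msg, false, true);
      return;
    }

    const targetQueue = `spark-server.actions.${serverId}`;
    await channel.assertQueue(targetQueue, { durable: true });
    channel.sendToQueue(targetQueue, msg.content, { persistent: true });
    channel.ack(msg);
  });
}
```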
Answer processing and event processing are always done by workers.
The headless spark-server is a spark-server acting as described.
From a worker's view, the whole system looks like one event queue and one action queue to send actions to; everything else is managed behind the scenes. If a spark-server crashes, the load balancer will send new connections to another spark-server. The Particle reconnects and the "online" event automatically updates the Redis db with the new spark-server. Timeouts on the Redis keys ensure that messages from / actions to a dead server are re-enqueued.
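A small sketch of that registry update, again with ioredis and the same hypothetical key scheme: each spark-server refreshes the key with a TTL whenever it sees a device's "online" event, so entries for a server that stops publishing simply expire and its pending actions can be re-enqueued.

```typescript
import Redis from 'ioredis';

const redis = new Redis();

// Called by a spark-server when it receives the "online" event for a device.
// The TTL makes the registry self-healing: if this server dies and stops
// refreshing the key, the key expires and actions get routed elsewhere.
async function registerManagingServer(deviceID: string, serverId: string): Promise<void> {
  // 'EX', 120: the key expires after 120 seconds unless refreshed (the value is an assumption).
  await redis.set(`managing-server:${deviceID}`, serverId, 'EX', 120);
}
```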
In the next week I will build a sample of this scenario, including a picture.
My RabbitMQ solution looks like (and works the same as) your EventEmitter solution between spark-server and spark-protocol. In the end my headless spark-server is an EventProvider plus a consumer that takes incoming actions from a RabbitMQ queue and transforms them into publishAndWaitForAnswer calls.
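A condensed sketch of that bridge, with amqplib assumed and the publishAndWaitForAnswer signature only guessed at (the real spark-protocol API may differ):

```typescript
import * as amqp from 'amqplib';

// Hypothetical shape of the event publisher; the real spark-protocol
// publishAndWaitForAnswer signature may differ.
interface EventPublisher {
  publishAndWaitForAnswer(event: object): Promise<object>;
}

// Headless spark-server: no HTTP API, just the EventProvider wiring plus this
// consumer that turns queued actions into internal publish/answer round trips.
async function startHeadlessActionConsumer(serverId: string, publisher: EventPublisher) {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();

  const actionQueue = `spark-server.actions.${serverId}`;
  await channel.assertQueue(actionQueue, { durable: true });

  await channel.consume(actionQueue, async (msg) => {
    if (!msg) return;
    const action = JSON.parse(msg.content.toString());

    // Ask the locally connected device, then send the answer back to the
    // reply queue named in the message properties (see the reply-queue sketch above).
    const answer = await publisher.publishAndWaitForAnswer(action);
    if (msg.properties.replyTo) {
      channel.sendToQueue(msg.properties.replyTo, Buffer.from(JSON.stringify(answer)), {
        correlationId: msg.properties.correlationId,
      });
    }
    channel.ack(msg);
  });
}
```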
Done
Hi,
I've played a lot with the EventProvider from spark-protocol.
The EventProvider subscribes to '*', and as a result the server will automatically also receive all intercom events, like "spark-server/get_attributes/request/9bce852d-28b5-49d4-a45a-6bd36987b394" etc.
I would suggest filtering these intercom events out before they are handed to the EventProvider subscribers.
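For illustration, a minimal sketch of what such a filter could look like; the event shape and the "spark-server/" prefix check are assumptions based on the example event name above.

```typescript
// Hypothetical event shape; the real spark-protocol event object may differ.
interface SparkEvent {
  name: string;
  data?: string;
}

// Intercom events are published under names like
// "spark-server/get_attributes/request/<id>", so a simple prefix check
// keeps them out of the '*' subscription handler.
const INTERCOM_PREFIX = 'spark-server/';

function isIntercomEvent(event: SparkEvent): boolean {
  return event.name.startsWith(INTERCOM_PREFIX);
}

function handleDeviceEvent(event: SparkEvent): void {
  if (isIntercomEvent(event)) {
    return; // skip internal request/response traffic
  }
  // ...forward the event to the normal consumers as before
  console.log('device event', event.name, event.data);
}
```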
I will make a Fork and a PR for these changes