Closed by xinz 6 years ago
@xinz This is a known issue (#374, https://github.com/quantum-elixir/quantum-core/issues/368). I haven't had the time yet to come up with a final solution.
I'm therefore going to close this issue as a duplicate.
If quantum elects one node as "master" by default, can we configure more nodes to be "master" for the availability of application?
This is not possible with the current design of the application. If we wanted to implement it like that we'd need a much more complicated setup.
If you would want to have multi master as a feature and are willing to help out with the implementation, feel free to open an issue and start the discussion regarding that.
@maennchen Thanks. What I actually hope for is that shutting down any node won't stop the job(s) from continuing on the remaining nodes (if any) — every node in the cluster should have the same role/priority when distributing the worker processes.
I will keep watching the related issue.
When setting up multiple nodes locally, e.g.:

```shell
iex --name ac1@127.0.0.1 -S mix
iex --name ac2@127.0.0.1 -S mix
iex --name ac3@127.0.0.1 -S mix
```
After starting these nodes, I manually pinged each node from the others.
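The manual pinging step can be done from each node's iex shell with `Node.ping/1` (a minimal sketch; the node names match the iex commands above):

```elixir
# In the iex shell of ac1@127.0.0.1 — connect to the other two nodes.
# Node.ping/1 returns :pong on success, :pang on failure.
Node.ping(:"ac2@127.0.0.1")
Node.ping(:"ac3@127.0.0.1")

# Verify the cluster is fully connected:
Node.list()
```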
Scheduler
config.exs
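The Scheduler and config.exs code blocks were not preserved here; for quantum 2.3.x they would presumably look something like the following sketch (the app name `:quantum_demo`, the module name, and the job itself are assumptions, not the reporter's actual code):

```elixir
# lib/quantum_demo/scheduler.ex
defmodule QuantumDemo.Scheduler do
  use Quantum.Scheduler, otp_app: :quantum_demo
end
```

```elixir
# config/config.exs
config :quantum_demo, QuantumDemo.Scheduler,
  # global: true makes quantum run the scheduler once cluster-wide
  # (via swarm) instead of once per node
  global: true,
  jobs: [
    # placeholder job: runs every minute on whichever node holds the
    # global scheduler processes
    {"* * * * *", fn -> IO.puts("tick from #{Node.self()}") end}
  ]
```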
Using:

- gen_stage 0.14.1
- quantum 2.3.3
- swarm 3.3.1
From the observer's perspective, I see that in the swarm application there is one "master" node (ac1@127.0.0.1, see the following screenshot) holding the JobBroadcaster/ExecutionBroadcaster/TaskRegistry processes. The prepared processes for quantum_demo are all ready:
If I close the ac2/ac3 nodes, the application keeps working, and restarted nodes that rejoin the cluster continue running the job globally. If I close the ac1@127.0.0.1 node, however, the rest of the cluster nodes no longer work together, failing with the following error. Could you please advise whether this is known behavior/by design?
If quantum elects one node as "master" by default, can we configure more nodes to be "master" for the availability of application?