linuxgood1230 opened 7 years ago
These scenarios do not invoke a recovery. Where do you expect to get a successorKey? Can you elaborate on what it is you wish to achieve?
I register all hosts from orchestrator into consul-kv, and consul-kv automatically configures HAProxy. When a leaf slave goes down, I use the command line to replace the consul-kv data and update HAProxy. But the scenarios above do not return a successorKey, so I cannot replace the failedKey with a successorKey.
@linuxgood1230 sorry, I still find it difficult to follow.
orchestrator does not recover leaf nodes, only masters and intermediate masters. I don't see the connection between your desire to keep consul-kv up to date and the failover mechanism.
Did you wish for orchestrator to let you know whenever a leaf node was down? Please do try and elaborate.
Thanks. I want to know which leaf node is down, and which one can replace it.
orchestrator does not replace leaf nodes, and I don't know what it means to replace a leaf node. Replace it with another leaf node? What then replaces that other leaf node?
I would very much like to assist here, but unfortunately I don't get a good grasp of your problem. Please see if you can tell the story in more detail or else I don't know what to make of it.
Thanks. For a failed leaf node, its master or a sibling node can replace it. I want to replace the failed instance in the HAProxy config.
I see orchestrator uses consul-kv to store the instance key and status. I will test.
Again, orchestrator does not handle or register the death of leaf nodes. KV will be unaffected. I suggest you just use the API for /api/instances/cluster/<alias> and figure out from there which instance is dead.
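The suggestion above can be sketched in a few lines. This is a minimal illustration, not official tooling: it assumes the `/api/instances/cluster/<alias>` endpoint returns a JSON array of instance objects, each carrying a `Key` (with `Hostname` and `Port`) and an `IsLastCheckValid` flag, which matches the fields mentioned in this thread. The helper names are my own.

```python
import json
import urllib.request

def dead_instances(instances):
    """Return (hostname, port) for every instance whose last check failed,
    i.e. whose IsLastCheckValid is false or missing."""
    return [
        (i["Key"]["Hostname"], i["Key"]["Port"])
        for i in instances
        if not i.get("IsLastCheckValid", False)
    ]

def fetch_cluster_instances(base_url, alias):
    """Fetch all instances of a cluster from the orchestrator API
    (hypothetical wrapper; requires a reachable orchestrator server)."""
    url = f"{base_url}/api/instances/cluster/{alias}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Example with inline sample data, so no server is needed:
sample = [
    {"Key": {"Hostname": "db1", "Port": 3306}, "IsLastCheckValid": True},
    {"Key": {"Hostname": "db2", "Port": 3306}, "IsLastCheckValid": False},
]
print(dead_instances(sample))  # → [('db2', 3306)]
```

A periodic job running this check could then rewrite the relevant consul-kv entries, which is the flow described later in this thread.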
Handle DeadSlave / DeadSlaves. Then we can manage all instances via orchestrator, using an orchestrator hook process to update the proxy.
May I suggest a different approach, illustrated in Context aware MySQL pools via HAProxy?
Otherwise, if you wish orchestrator to be your source of truth, and to update the proxy via orchestrator, then please revisit my suggestion above and use /api/instances/cluster/<alias>, where you will find IsLastCheckValid as a liveness indication.
Thanks very much. I use orchestrator as the trigger: it triggers all ops (checks, and notifies which instance is OK, which one failed, and which can replace the failed one). Now, for DeadSlave/DeadSlaves, I am trying to add a hook that invokes a shell command to update consul-kv, which then updates the HAProxy config.
The "Context aware MySQL pools via HAProxy" approach has too many sources of truth. I want orchestrator to make all decisions: orchestrator knows the percentage of failed instances (perhaps on a single physical machine, or in a single DC), and it decides how to recover the topology.
I have added a DeadLeafSlave analysis that triggers executeProcess. Thanks very much. I will push my code when it is production-ready in our company.
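For context on wiring such an analysis to a shell command: orchestrator runs configured process hooks and substitutes placeholders such as {failureType}, {failedHost}, and {failedPort}. A sketch of a detection hook, assuming a custom script named update-consul-kv.sh (the script name is hypothetical):

```json
{
  "OnFailureDetectionProcesses": [
    "/usr/local/bin/update-consul-kv.sh {failureType} {failedHost} {failedPort}"
  ]
}
```

Note that successor placeholders such as {successorHost} are only populated for recoveries that actually promote a successor, which relates to the complaint below about a successor host not being returned.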
This should always return a successor host if one exists, so that the failed host can be replaced.