chungers closed this pull request 6 years ago
Merging #839 into master will increase coverage by 0.06%. The diff coverage is 93.1%.
```diff
@@            Coverage Diff             @@
##           master     #839      +/-   ##
==========================================
+ Coverage   48.24%   48.31%   +0.06%
==========================================
  Files          89       89
  Lines        7980     7994      +14
==========================================
+ Hits         3850     3862      +12
- Misses       3773     3774       +1
- Partials      357      358       +1
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| pkg/plugin/group/scaler.go | 89.47% <100%> (+0.06%) | :arrow_up: |
| pkg/plugin/group/state.go | 83.33% <100%> (+2.08%) | :arrow_up: |
| pkg/plugin/group/quorum.go | 90.24% <100%> (+0.12%) | :arrow_up: |
| pkg/plugin/group/group.go | 45.54% <100%> (+0.28%) | :arrow_up: |
| pkg/plugin/group/testplugin.go | 76.47% <100%> (+0.28%) | :arrow_up: |
| pkg/plugin/group/rollingupdate.go | 89.74% <75%> (-2.15%) | :arrow_down: |
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 35f95ad...a5f2b00. Read the comment docs.
This PR provides one implementation to address the leadership handover required by #838.

Since there isn't an explicit way to demote a leader in Swarm mode (even though it's possible to demote a manager to a worker), this implementation works around the limitation by ensuring that the leader node is either never destroyed or is the very last one to be destroyed. In the latter case, leadership is transferred to another manager node that has already been updated. The cluster can then complete the rolling update by re-provisioning the very last node, the previous leader.
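The "leader goes last" ordering described above can be sketched as a reordering of the destroy candidates so that every other manager is re-provisioned (and becomes eligible to take over leadership) before the leader itself. This is an illustrative sketch under assumed names, not InfraKit's actual code:

```go
package main

import "fmt"

// sortLeaderLast returns the destroy order with the leader moved to the end,
// so all other managers are updated first and leadership can transfer to one
// of them before the old leader is destroyed. Hypothetical helper, not the
// real rollingupdate.go implementation.
func sortLeaderLast(ids []string, leader string) []string {
	out := make([]string, 0, len(ids))
	for _, id := range ids {
		if id != leader {
			out = append(out, id) // non-leaders keep their relative order
		}
	}
	for _, id := range ids {
		if id == leader {
			out = append(out, id) // the leader, if present, goes last
		}
	}
	return out
}

func main() {
	fmt.Println(sortLeaderLast([]string{"mgr2", "leader", "mgr3"}, "leader"))
	// → [mgr2 mgr3 leader]
}
```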
This PR allows the user to specify an option, `INFRAKIT_GROUP_POLICY_LEADER_SELF_UPDATE`, which can be either `last` or `never`. If the policy is set to `last`, then the leader node (as determined by the node running the code, whose own self LogicalID == the LogicalID of the instance) will always be the last node to be destroyed. If the policy is set to `never`, then the node is always considered 'desirable' and is therefore left untouched during the rolling update process, in which 'undesirable' nodes are destroyed and re-provisioned.

Signed-off-by: David Chung david.chung@docker.com
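The policy decision above can be sketched as a small predicate: keep (do not destroy) the instance when it is the leader and the policy says so. The function and type names here are illustrative assumptions, not InfraKit's actual API:

```go
package main

import (
	"fmt"
	"os"
)

// LogicalID identifies an instance, loosely mirroring InfraKit's
// instance.LogicalID. Illustrative only.
type LogicalID string

// keepDuringUpdate decides whether an instance should be spared during a
// rolling update, given the policy value read from
// INFRAKIT_GROUP_POLICY_LEADER_SELF_UPDATE. Hypothetical sketch of the
// behavior described in the PR, not the real implementation.
func keepDuringUpdate(policy string, self, instance LogicalID, remaining int) bool {
	// The node running the code compares its own LogicalID to the instance's.
	isLeader := self == instance
	if !isLeader {
		return false // non-leaders follow the normal rolling update
	}
	switch policy {
	case "never":
		return true // leader is always 'desirable' and never destroyed
	case "last":
		return remaining > 1 // spare the leader until it is the only one left
	}
	return false
}

func main() {
	policy := os.Getenv("INFRAKIT_GROUP_POLICY_LEADER_SELF_UPDATE")
	if policy == "" {
		policy = "last" // assumed default for this sketch
	}
	fmt.Println(keepDuringUpdate(policy, "mgr1", "mgr1", 3))
}
```

Under the `last` policy this predicate keeps returning true for the leader until every other node has been re-provisioned, at which point leadership has already moved to an updated manager and the old leader can safely be destroyed.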