SUSE / caasp-salt

A collection of salt states used to provision a kubernetes cluster
Apache License 2.0

[3.0][k8s 1.10] don't run haproxy states when not really needed #687

Closed MaximilianMeister closed 5 years ago

MaximilianMeister commented 5 years ago

In the case of a Kubernetes update from 1.9 to 1.10, we can't afford to stop Kubernetes through the haproxy states, because it will not be able to restart: the --config file flag has changed between those releases.

The update orchestration then fails in the sanity check of the all-workers-3.0-pre-clean-shutdown state, because the new kubelet configuration has already been applied while the old Kubernetes version is still running before the reboot.

This is a corner case, and our other states would have to be adapted as well to re-run their configuration when a node gets accidentally rebooted before the config has been applied.

Furthermore, this is only an issue when coming from v2 during the migration to v3, so the case where this happens is even rarer.

Running this state on each worker would require a check for /etc/caasp/haproxy/haproxy.cfg to safely determine whether it needs to be run or not (see the sketch below), but it is not possible to use Salt runners with a target to determine whether this file exists on all worker nodes.
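A minimal sketch of what such a per-worker guard could look like, assuming a Jinja check rendered on each minion; the state ID and the exact shutdown action are illustrative and not taken from the real caasp-salt states:

```sls
{# Hypothetical guard: only stop haproxy when the v3 config has actually
   been written to this node; otherwise render a harmless no-op. #}
{% if salt['file.file_exists']('/etc/caasp/haproxy/haproxy.cfg') %}
haproxy-shutdown:
  service.dead:
    - name: haproxy
{% else %}
haproxy-shutdown:
  test.succeed_without_changes:
    - name: haproxy config not present, nothing to shut down
{% endif %}
```

Note that this check runs on the minion at render time; it does not help the update orchestration on the master, which would need to know up front whether the file exists on all workers.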

salt.runners.salt.cmd doesn't accept targets, and salt.runners.salt.execute only exists since Salt 2017.7.0, which might not be present yet for a user who hasn't installed the Salt upgrade.
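For reference, a targeted check from the master would look roughly like the call below, but it depends on salt.execute and therefore on Salt >= 2017.7.0; the glob target and the path argument handling are only illustrative:

```sh
# Needs salt >= 2017.7.0; salt.runners.salt.cmd has no equivalent targeting.
salt-run salt.execute '*' file.file_exists arg="['/etc/caasp/haproxy/haproxy.cfg']"
```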

bsc#1114645

Signed-off-by: Maximilian Meister <mmeister@suse.de>
(cherry picked from commit 11c82a549ea9284374507e86319a4d0c71fa6b78)