Upgrade is working.
Confirmed again with this exact PR: upgraded a 3+2 cluster (3 masters, 2 workers) directly from 2.0 GM to 3.0:
```
Summary for admin.infra.caasp.local_master
-------------
Succeeded: 62 (changed=44)
Failed:     0
-------------
Total states run:     62
Total run time:  1109.206 s
```
and:
```
admin:~ # kubectl get nodes -o wide
NAME       STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                 KERNEL-VERSION          CONTAINER-RUNTIME
master-0   Ready     master    15m       v1.9.7    <none>        SUSE CaaS Platform 3.0   4.4.126-94.22-default   docker://17.09.1
master-1   Ready     master    15m       v1.9.7    <none>        SUSE CaaS Platform 3.0   4.4.126-94.22-default   docker://17.09.1
master-2   Ready     master    15m       v1.9.7    <none>        SUSE CaaS Platform 3.0   4.4.126-94.22-default   docker://17.09.1
worker-0   Ready     <none>    15m       v1.9.7    <none>        SUSE CaaS Platform 3.0   4.4.126-94.22-default   docker://17.09.1
worker-1   Ready     <none>    15m       v1.9.7    <none>        SUSE CaaS Platform 3.0   4.4.126-94.22-default   docker://17.09.1
```
This way we don't try to uncordon the node from the `kubelet/init.sls` file, which is required for example by `haproxy`. That would end up with the machine trying to uncordon itself too early (when the `haproxy` configuration hasn't been written yet), leading to an early failure. Splitting this action out and calling it only when required (that is, during the update process) is safer; a sketch of the split follows below.
Fixes: bsc#1080978
(cherry picked from commit 02f063385e3a8cd435a76280ca246a87099a01d5)
Backport of https://github.com/kubic-project/salt/pull/584