jakolehm closed this issue 6 years ago.
See https://github.com/kontena/pharos-cluster/issues/448#issuecomment-399885443 for the kubeadm kubelet `--resolv-conf=` changes.
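For reference, that flag points the kubelet at the real resolv.conf on systemd-resolved hosts. A sketch of setting it via the kubelet's environment file (the `/etc/default/kubelet` path and the `KUBELET_EXTRA_ARGS` variable are how kubeadm's stock dropin reads extra flags on Debian/Ubuntu; treat both as assumptions — the snippet writes under `/tmp` so it is runnable anywhere):

```shell
# Sketch: point the kubelet at systemd-resolved's real resolv.conf.
# The real file kubeadm's dropin sources is /etc/default/kubelet (assumption);
# /tmp is used here so this is harmless to run.
cat > /tmp/kubelet-defaults <<'EOF'
KUBELET_EXTRA_ARGS=--resolv-conf=/run/systemd/resolve/resolv.conf
EOF
```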
Some relevant points from the release notes: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md
* `Pharos::Kube::Stack`: resource fields are now case-sensitive... with the direct YAML -> JSON conversion that we're doing, this might mean that some of our YAML files are now broken?
* The kubeadm config is now `v1alpha2`, with major changes:
  * `.noTaintMaster` => `.nodeRegistration.taints`
  * `.CloudProvider` was removed... `--cloud-provider=external`?
  * `.NodeName` => `.NodeRegistration.Name`
  * `.CRISocket` => `.NodeRegistration.CRISocket`
* The `--endpoint-reconciler-type` default is now `lease`, and the old `master-count` reconciler with `--apiserver-count` is deprecated
* The `/etc/systemd/system/kubelet.service.d/*.conf` dropins are different
* etcd should be upgraded from 3.1 -> 3.2:

  ```
  [WARNING ExternalEtcdVersion]: this version of kubeadm only supports external etcd version >= 3.2.17. Current version: 3.1.12
  ```
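A minimal sketch of what the renamed fields look like in a `v1alpha2` config (the values are made up, and the file is written under `/tmp` so the snippet is harmless to run):

```shell
# Hypothetical v1alpha2 MasterConfiguration showing the renamed fields.
cat > /tmp/kubeadm-v1alpha2.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
nodeRegistration:
  name: master-1                        # was .NodeName
  criSocket: /var/run/dockershim.sock   # was .CRISocket
  taints: []                            # empty list replaces .noTaintMaster
EOF
```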
For the 1.10 -> 1.11 upgrade, kubeadm has a new `kubeadm upgrade node config` command: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-11/#upgrade-kubelet-on-each-node

Need to check out what it actually does, and integrate it into the `MigrateWorker` phase...
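The linked docs boil down to roughly the following per-node steps. This is a sketch only: the package manager (apt) and the exact version pins are assumptions, and it is wrapped in a function so nothing runs against a live node here:

```shell
# Sketch of the per-node kubelet upgrade flow from the linked 1.10 -> 1.11 docs.
# Assumes apt-based hosts and target version 1.11.0; adjust for the real target.
upgrade_node_to_1_11() {
  # Writes the new /var/lib/kubelet/config.yaml for the 1.11 kubelet
  kubeadm upgrade node config --kubelet-version v1.11.0
  apt-get update
  apt-get install -y kubelet=1.11.0-00 kubeadm=1.11.0-00
  systemctl restart kubelet
}
```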
Upgrading the `kubelet` + `kubeadm` packages leaves the kubelet in a restart loop due to the new `--config=`:

```
Jul 04 11:36:08 terom-pharos-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jul 04 11:36:08 terom-pharos-worker1 kubelet[11219]: I0704 11:36:08.791788 11219 feature_gate.go:230] feature gates: &{map[]}
Jul 04 11:36:08 terom-pharos-worker1 kubelet[11219]: F0704 11:36:08.792323 11219 server.go:190] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jul 04 11:36:08 terom-pharos-worker1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jul 04 11:36:08 terom-pharos-worker1 systemd[1]: kubelet.service: Unit entered failed state.
Jul 04 11:36:08 terom-pharos-worker1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
```
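That crash-loop could be detected before restarting the kubelet by checking whether every `--config=` path referenced by the dropins actually exists. A sketch (it uses a fake dropin under `/tmp` standing in for `/etc/systemd/system/kubelet.service.d/`, so it runs anywhere):

```shell
# Sketch: find --config= paths in kubelet dropins and flag missing files.
# A fake dropin under /tmp stands in for /etc/systemd/system/kubelet.service.d/.
mkdir -p /tmp/kubelet.service.d
cat > /tmp/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Service]
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EOF

missing=""
for cfg in $(grep -h -o -e '--config=[^" ]*' /tmp/kubelet.service.d/*.conf | cut -d= -f2); do
  # any referenced config file that does not exist would crash-loop the kubelet
  [ -f "$cfg" ] || missing="$missing $cfg"
done
echo "missing kubelet configs:$missing"
```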
I wonder if `kube-proxy` is supposed to default to `ipvs` now? The `--proxy-mode` docs at https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ may just be outdated...
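One way to answer that on a live cluster is to look at the kube-proxy configmap and logs. A sketch, wrapped in a function since it needs a working `kubectl` context; the configmap name and the `k8s-app=kube-proxy` label are what kubeadm deployments conventionally use, but treat them as assumptions:

```shell
# Sketch: inspect which proxy mode a live cluster actually ended up with.
# Needs a working kubectl context, so it is only defined here, not run.
check_proxy_mode() {
  # kubeadm stores kube-proxy's config in this configmap (assumption)
  kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'
  # the running proxier also logs which mode it chose at startup
  kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=100 | grep -i proxier
}
```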
```
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
```
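The preflight warning itself lists the modules it wants, so the fix is presumably just loading them. A sketch using the module names from the warning above, wrapped in a function because `modprobe` needs root and a matching kernel:

```shell
# Sketch: load the IPVS modules named in the preflight warning (needs root).
load_ipvs_modules() {
  for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe "$mod"
  done
}
```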
https://github.com/kubernetes/sig-release/blob/master/releases/release-1.11/release-1.11.md