Thanks for reporting it @shahbour
Did you set `master03` as a control plane node correctly before upgrading? I don't see `master03` in the log :(
Your `kubeadm-config` ConfigMap contains only information about `master`, `master01` and `master02`.
I0524 10:44:32.919636 32399 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"65d9324c-090e-11e9-835f-0050568163fc","resourceVersion":"27300049","creationTimestamp":"2018-12-26T13:01:45Z"},"data":{"ClusterConfiguration":"apiServer:\n extraArgs:\n authorization-mode: Node,RBAC\n timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 192.168.70.234:6443\ncontrollerManager: {}\ndns:\n type: CoreDNS\netcd:\n local:\n dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.14.2\nnetworking:\n dnsDomain: cluster.local\n podSubnet: \"\"\n serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n master:\n advertiseAddress: 192.168.70.232\n bindPort: 6443\n master01:\n advertiseAddress: 192.168.70.236\n bindPort: 6443\n master02:\n advertiseAddress: 192.168.70.237\n bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
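If you want to double-check it yourself, the ClusterStatus that kubeadm reads during the upgrade can be printed directly with plain kubectl (nothing cluster-specific assumed here):

```sh
# Print only the ClusterStatus document from the kubeadm-config ConfigMap
kubectl -n kube-system get configmap kubeadm-config \
  -o jsonpath='{.data.ClusterStatus}'

# Or open the whole ConfigMap to correct stale apiEndpoints entries by hand
kubectl -n kube-system edit configmap kubeadm-config
```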
No idea how this happened; I do have master, master02 and master03. I fixed the names to match the IPs and the upgrade then went through perfectly.
(Maybe I did something wrong back when I converted my cluster from one master to multiple masters.)
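For anyone hitting the same mismatch, this is roughly what the corrected ClusterStatus should look like after editing the ConfigMap; the addresses below for master02 and master03 are placeholders, since the actual name-to-IP mapping was never shown in the logs:

```yaml
apiEndpoints:
  master:
    advertiseAddress: 192.168.70.232   # as seen in the log above
    bindPort: 6443
  master02:
    advertiseAddress: <master02-ip>    # placeholder; use the node's real advertise address
    bindPort: 6443
  master03:
    advertiseAddress: <master03-ip>    # placeholder; use the node's real advertise address
    bindPort: 6443
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterStatus
```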
Thanks for the support
I am trying to upgrade my Kubernetes cluster from version 1.13.2 to 1.14.2. I started with the first two master nodes and it worked; when trying it on the third, it gives me an error.
I tried to increase the log verbosity to see where the problem is, but I could not get any hint.
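In case it helps anyone reproduce the output above: the `Response Body:` lines in the log are what client-go emits at high klog verbosity. A sketch of how to get them, assuming the v1.14-era command for upgrading an additional control-plane node:

```sh
# --v=8 (or higher) makes client-go log API request/response bodies;
# 'experimental-control-plane' was the v1.14 subcommand for secondary masters
kubeadm upgrade node experimental-control-plane --v=8
```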