kubernetes / kubeadm

Aggregator for issues filed against kubeadm

unable to fetch the kubeadm-config ConfigMap: failed to getAPIEndpoint: failed to get APIEndpoint information for this node #1570

Closed · shahbour closed this issue 5 years ago

shahbour commented 5 years ago

I am trying to upgrade a Kubernetes cluster from version 1.13.2 to 1.14.2. I started with the first two master nodes and it worked. When trying on the third, it gives me this error:

[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
unable to fetch the kubeadm-config ConfigMap: failed to getAPIEndpoint: failed to get APIEndpoint information for this node

I tried increasing the log verbosity to see where the problem lies, but I could not find any hint:

[root@master03 ~]# kubeadm -v10  upgrade node experimental-control-plane
I0524 10:44:32.848001   32399 node.go:129] [upgrade] found NodeName empty; considered OS hostname as NodeName
I0524 10:44:32.848587   32399 interface.go:384] Looking for default routes with IPv4 addresses
I0524 10:44:32.848613   32399 interface.go:389] Default route transits interface "eno16777984"
I0524 10:44:32.849201   32399 interface.go:196] Interface eno16777984 is up
I0524 10:44:32.849352   32399 interface.go:244] Interface "eno16777984" has 2 addresses :[192.168.70.237/28 fe80::250:56ff:fe81:317b/64].
I0524 10:44:32.849428   32399 interface.go:211] Checking addr  192.168.70.237/28.
I0524 10:44:32.849452   32399 interface.go:218] IP found 192.168.70.237
I0524 10:44:32.849488   32399 interface.go:250] Found valid IPv4 address 192.168.70.237 for interface "eno16777984".
I0524 10:44:32.849509   32399 interface.go:395] Found active IP 192.168.70.237 
I0524 10:44:32.853182   32399 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I0524 10:44:32.893830   32399 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.14.2 (linux/amd64) kubernetes/66049e3" -H "Accept: application/json, */*" 'https://192.168.70.234:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config'
I0524 10:44:32.919454   32399 round_trippers.go:438] GET https://192.168.70.234:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config 200 OK in 25 milliseconds
I0524 10:44:32.919501   32399 round_trippers.go:444] Response Headers:
I0524 10:44:32.919518   32399 round_trippers.go:447]     Date: Fri, 24 May 2019 10:44:32 GMT
I0524 10:44:32.919532   32399 round_trippers.go:447]     Content-Type: application/json
I0524 10:44:32.919545   32399 round_trippers.go:447]     Content-Length: 1143
I0524 10:44:32.919636   32399 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"65d9324c-090e-11e9-835f-0050568163fc","resourceVersion":"27300049","creationTimestamp":"2018-12-26T13:01:45Z"},"data":{"ClusterConfiguration":"apiServer:\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 192.168.70.234:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  local:\n    dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.14.2\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: \"\"\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  master:\n    advertiseAddress: 192.168.70.232\n    bindPort: 6443\n  master01:\n    advertiseAddress: 192.168.70.236\n    bindPort: 6443\n  master02:\n    advertiseAddress: 192.168.70.237\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
I0524 10:44:32.923740   32399 round_trippers.go:419] curl -k -v -XGET  -H "User-Agent: kubeadm/v1.14.2 (linux/amd64) kubernetes/66049e3" -H "Accept: application/json, */*" 'https://192.168.70.234:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy'
I0524 10:44:32.929643   32399 round_trippers.go:438] GET https://192.168.70.234:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy 200 OK in 5 milliseconds
I0524 10:44:32.929695   32399 round_trippers.go:444] Response Headers:
I0524 10:44:32.929720   32399 round_trippers.go:447]     Content-Type: application/json
I0524 10:44:32.929740   32399 round_trippers.go:447]     Content-Length: 1744
I0524 10:44:32.929760   32399 round_trippers.go:447]     Date: Fri, 24 May 2019 10:44:32 GMT
I0524 10:44:32.929852   32399 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kube-proxy","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kube-proxy","uid":"66a06a66-090e-11e9-835f-0050568163fc","resourceVersion":"27300065","creationTimestamp":"2018-12-26T13:01:46Z","labels":{"app":"kube-proxy"}},"data":{"config.conf":"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nbindAddress: 0.0.0.0\nclientConnection:\n  acceptContentTypes: \"\"\n  burst: 10\n  contentType: application/vnd.kubernetes.protobuf\n  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf\n  qps: 5\nclusterCIDR: \"\"\nconfigSyncPeriod: 15m0s\nconntrack:\n  max: null\n  maxPerCore: 32768\n  min: 131072\n  tcpCloseWaitTimeout: 1h0m0s\n  tcpEstablishedTimeout: 24h0m0s\nenableProfiling: false\nhealthzBindAddress: 0.0.0.0:10256\nhostnameOverride: \"\"\niptables:\n  masqueradeAll: false\n  masqueradeBit: 14\n  minSyncPeriod: 0s\n  syncPeriod: 30s\nipvs:\n  excludeCIDRs: null\n  minSyncPeriod: 0s\n  scheduler: \"\"\n  strictARP: false\n  syncPeriod: 30s\nkind: KubeProxyConfiguration\nmetricsBindAddress: 127.0.0.1:10249\nmode: \"\"\nnodePortAddresses: null\noomScoreAdj: -999\nportRange: \"\"\nresourceContainer: /kube-proxy\nudpIdleTimeout: 250ms\nwinkernel:\n  enableDSR: false\n  networkName: \"\"\n  sourceVip: \"\"","kubeconfig.conf":"apiVersion: v1\nkind: Config\nclusters:\n- cluster:\n    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\n    server: https://192.168.70.234:6443\n  name: default\ncontexts:\n- context:\n    cluster: default\n    namespace: default\n    user: default\n  name: default\ncurrent-context: default\nusers:\n- name: default\n  user:\n    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token"}}
I0524 10:44:32.933402   32399 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.14.2 (linux/amd64) kubernetes/66049e3" 'https://192.168.70.234:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.14'
I0524 10:44:32.938667   32399 round_trippers.go:438] GET https://192.168.70.234:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.14 200 OK in 5 milliseconds
I0524 10:44:32.938703   32399 round_trippers.go:444] Response Headers:
I0524 10:44:32.938718   32399 round_trippers.go:447]     Content-Type: application/json
I0524 10:44:32.938735   32399 round_trippers.go:447]     Content-Length: 2138
I0524 10:44:32.938748   32399 round_trippers.go:447]     Date: Fri, 24 May 2019 10:44:32 GMT
I0524 10:44:32.939218   32399 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubelet-config-1.14","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubelet-config-1.14","uid":"2c1fa424-7e00-11e9-bd71-00505681317b","resourceVersion":"27300051","creationTimestamp":"2019-05-24T08:44:41Z"},"data":{"kubelet":"address: 0.0.0.0\napiVersion: kubelet.config.k8s.io/v1beta1\nauthentication:\n  anonymous:\n    enabled: false\n  webhook:\n    cacheTTL: 2m0s\n    enabled: true\n  x509:\n    clientCAFile: /etc/kubernetes/pki/ca.crt\nauthorization:\n  mode: Webhook\n  webhook:\n    cacheAuthorizedTTL: 5m0s\n    cacheUnauthorizedTTL: 30s\ncgroupDriver: cgroupfs\ncgroupsPerQOS: true\nclusterDNS:\n- 10.96.0.10\nclusterDomain: cluster.local\nconfigMapAndSecretChangeDetectionStrategy: Watch\ncontainerLogMaxFiles: 5\ncontainerLogMaxSize: 10Mi\ncontentType: application/vnd.kubernetes.protobuf\ncpuCFSQuota: true\ncpuCFSQuotaPeriod: 100ms\ncpuManagerPolicy: none\ncpuManagerReconcilePeriod: 10s\nenableControllerAttachDetach: true\nenableDebuggingHandlers: true\nenforceNodeAllocatable:\n- pods\neventBurst: 10\neventRecordQPS: 5\nevictionHard:\n  imagefs.available: 15%\n  memory.available: 100Mi\n  nodefs.available: 10%\n  nodefs.inodesFree: 5%\nevictionPressureTransitionPeriod: 5m0s\nfailSwapOn: true\nfileCheckFrequency: 20s\nhairpinMode: promiscuous-bridge\nhealthzBindAddress: 127.0.0.1\nhealthzPort: 10248\nhttpCheckFrequency: 20s\nimageGCHighThresholdPercent: 85\nimageGCLowThresholdPercent: 80\nimageMinimumGCAge: 2m0s\niptablesDropBit: 15\niptablesMasqueradeBit: 14\nkind: KubeletConfiguration\nkubeAPIBurst: 10\nkubeAPIQPS: 5\nmakeIPTablesUtilChains: true\nmaxOpenFiles: 1000000\nmaxPods: 110\nnodeLeaseDurationSeconds: 40\nnodeStatusReportFrequency: 1m0s\nnodeStatusUpdateFrequency: 10s\noomScoreAdj: -999\npodPidsLimit: -1\nport: 10250\nregistryBurst: 10\nregistryPullQPS: 5\nresolvConf: /etc/resolv.conf\nrotateCertificates: true\nruntimeRequestTimeout: 2m0s\nserializeImagePulls: true\nstaticPodPath: /etc/kubernetes/manifests\nstreamingConnectionIdleTimeout: 4h0m0s\nsyncFrequency: 1m0s\nvolumeStatsAggPeriod: 1m0s\n"}}
I0524 10:44:32.943421   32399 loader.go:359] Config loaded from file /etc/kubernetes/kubelet.conf
I0524 10:44:32.944092   32399 round_trippers.go:419] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.14.2 (linux/amd64) kubernetes/66049e3" 'https://192.168.70.234:6443/api/v1/nodes/master03'
I0524 10:44:32.949682   32399 round_trippers.go:438] GET https://192.168.70.234:6443/api/v1/nodes/master03 200 OK in 5 milliseconds
I0524 10:44:32.949716   32399 round_trippers.go:444] Response Headers:
I0524 10:44:32.949732   32399 round_trippers.go:447]     Content-Type: application/json
I0524 10:44:32.949745   32399 round_trippers.go:447]     Date: Fri, 24 May 2019 10:44:32 GMT
I0524 10:44:32.950129   32399 request.go:942] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"master03","selfLink":"/api/v1/nodes/master03","uid":"d57b4282-4177-11e9-ac0d-0050568163fc","resourceVersion":"27318582","creationTimestamp":"2019-03-08T07:57:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"master03","kubernetes.io/os":"linux","node-role.kubernetes.io/master":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"}},"spec":{"providerID":"vsphere://42018008-C622-5457-5072-DCA300AB0B1D","taints":[{"key":"node-role.kubernetes.io/master","value":"true","effect":"NoSchedule"}]},"status":{"capacity":{"cpu":"2","ephemeral-storage":"15866792Ki","hugepages-2Mi":"0","memory":"1882472Ki","pods":"110"},"allocatable":{"cpu":"2","ephemeral-storage":"14622835483","hugepages-2Mi":"0","memory":"1780072Ki","pods":"110"},"conditions":[{"type":"NetworkUnavailable","status":"False","lastHeartbeatTime":"2019-05-24T10:20:15Z","lastTransitionTime":"2019-05-24T10:20:15Z","reason":"WeaveIsUp","message":"Weave pod has set this"},{"type":"MemoryPressure","status":"False","lastHeartbeatTime":"2019-05-24T10:44:23Z","lastTransitionTime":"2019-05-24T10:19:11Z","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},{"type":"DiskPressure","status":"False","lastHeartbeatTime":"2019-05-24T10:44:23Z","lastTransitionTime":"2019-05-24T10:19:11Z","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},{"type":"PIDPressure","status":"False","lastHeartbeatTime":"2019-05-24T10:44:23Z","lastTransitionTime":"2019-05-24T10:19:11Z","reason":"KubeletHasSufficientPID","message":"kubelet has sufficient PID available"},{"type":"Ready","status":"True","lastHeartbeatTime":"2019-05-24T10:44:23Z","lastTransitionTime":"2019-05-24T10:19:21Z","reason":"KubeletReady","message":"kubelet is posting ready status"}],"addresses":[{"type":"InternalIP","address":"192.168.70.237"},{"type":"Hostname","address":"master03"}],"daemonEndpoints":{"kubeletEndpoint":{"Port":10250}},"nodeInfo":{"machineID":"cb78eedbee654c64a6a94404630dfae9","systemUUID":"42018008-C622-5457-5072-DCA300AB0B1D","bootID":"d93347b7-379d-4caa-8007-c9a9c2d68d96","kernelVersion":"3.10.0-957.5.1.el7.x86_64","osImage":"CentOS Linux 7 
(Core)","containerRuntimeVersion":"docker://18.9.3","kubeletVersion":"v1.14.2","kubeProxyVersion":"v1.14.2","operatingSystem":"linux","architecture":"amd64"},"images":[{"names":["cnastorage/enablevcp@sha256:e870a12fdb8be23b873c2e2cefb72f94c9fe5f2dadbe59fc58220300b052adae","cnastorage/enablevcp:v1"],"sizeBytes":534158993},{"names":["k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20","k8s.gcr.io/etcd:3.2.24"],"sizeBytes":219655340},{"names":["k8s.gcr.io/kube-apiserver@sha256:4b41f5d80026821a127cff6377f021d955bf1fa23c1f73df80cf4bbe4070a9db","k8s.gcr.io/kube-apiserver:v1.13.1"],"sizeBytes":180906066},{"names":["weaveworks/weave-kube@sha256:f1b6edd296cf0b7e806b1a1a1f121c1e8095852a4129edd08401fe2e7aab652d","weaveworks/weave-kube:2.5.0"],"sizeBytes":148083959},{"names":["k8s.gcr.io/kube-controller-manager@sha256:bccb975718434a5201a2cbfb2456bdb00be832e101308d374df60c14c001d7ff","k8s.gcr.io/kube-controller-manager:v1.13.1"],"sizeBytes":146191122},{"names":["k8s.gcr.io/kube-proxy@sha256:c91687ff6145f7fcdc7d8b3da3c530a9a9b50f0734c7a945a0d9c18fc3790dbc","k8s.gcr.io/kube-proxy:v1.14.2"],"sizeBytes":82106236},{"names":["k8s.gcr.io/kube-proxy@sha256:0b0284fb0f630be8cc3491e180f7032c3d6c7fc2904f58f319acaf8e7bdbecd7","k8s.gcr.io/kube-proxy:v1.13.1"],"sizeBytes":80222128},{"names":["k8s.gcr.io/kube-scheduler@sha256:4165e5f0d569b5b5e3bd90d78c30c5408b2c938d719939490299ab4cee9a9c0f","k8s.gcr.io/kube-scheduler:v1.13.1"],"sizeBytes":79582322},{"names":["fluent/fluent-bit@sha256:e2846270454ca731cb366a5d634bce3110bd90962d78b769684f9b45119537e9","fluent/fluent-bit:1.0.1"],"sizeBytes":53052062},{"names":["weaveworks/weave-npc@sha256:5bc9e4241eb0e972d3766864b2aca085660638b9d596d4fe761096db46a8c60b","weaveworks/weave-npc:2.5.0"],"sizeBytes":49506380},{"names":["prom/node-exporter@sha256:c390c8fea4cd362a28ad5070aedd6515aacdfdffd21de6db42ead05e332be5a9","prom/node-exporter:v0.17.0"],"sizeBytes":20982005},{"names":["k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea","k8s.gcr.io/pause:3.1"],"sizeBytes":742472}]}}
unable to fetch the kubeadm-config ConfigMap: failed to getAPIEndpoint: failed to get APIEndpoint information for this node
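
Note: the ConfigMap fetch itself succeeds (200 OK above); what fails is the next step, in which kubeadm matches this node's name against the keys under ClusterStatus.apiEndpoints. A quick way to compare the two sides of that lookup (a sketch, assuming kubectl is configured on the node, e.g. via /etc/kubernetes/admin.conf):

# 1. The node name kubeadm will use (the OS hostname unless overridden):
hostname

# 2. The apiEndpoints recorded in the cluster; the hostname from step 1
#    must appear as a key under "apiEndpoints" for the lookup to succeed:
kubectl -n kube-system get cm kubeadm-config -o jsonpath='{.data.ClusterStatus}'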
SataQiu commented 5 years ago

Thanks for reporting it @shahbour. Did you set up master03 as a control plane node correctly before upgrading? I don't see master03 in the log :( Your kubeadm-config ConfigMap contains information only about master, master01, and master02.

I0524 10:44:32.919636   32399 request.go:942] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"65d9324c-090e-11e9-835f-0050568163fc","resourceVersion":"27300049","creationTimestamp":"2018-12-26T13:01:45Z"},"data":{"ClusterConfiguration":"apiServer:\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta1\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrolPlaneEndpoint: 192.168.70.234:6443\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  local:\n    dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.14.2\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: \"\"\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  master:\n    advertiseAddress: 192.168.70.232\n    bindPort: 6443\n  master01:\n    advertiseAddress: 192.168.70.236\n    bindPort: 6443\n  master02:\n    advertiseAddress: 192.168.70.237\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta1\nkind: ClusterStatus\n"}}
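
Decoded, the ClusterStatus embedded in that response body reads as follows (annotations added; master03's InternalIP is taken from the Node object in the log above):

apiEndpoints:
  master:
    advertiseAddress: 192.168.70.232
    bindPort: 6443
  master01:
    advertiseAddress: 192.168.70.236
    bindPort: 6443
  master02:
    advertiseAddress: 192.168.70.237   # <- this is master03's InternalIP
    bindPort: 6443

There is no "master03" key, so the APIEndpoint lookup for the node named "master03" fails with the error above.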
shahbour commented 5 years ago

No idea how this happened; I do have master, master02, and master03. I fixed the names and IPs, and the upgrade then completed perfectly.
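
One way to apply that fix is to edit the ClusterStatus stored in the ConfigMap directly (a sketch; the exact entries to rename or re-address depend on your cluster):

kubectl -n kube-system edit cm kubeadm-config
# In the editor, correct the apiEndpoints section under the ClusterStatus key
# so that every control-plane node's name maps to its own advertiseAddress,
# e.g. add a "master03" entry with advertiseAddress 192.168.70.237 in place
# of the stale entry shown earlier.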

(Maybe I did something wrong when I updated my cluster from a single master to multiple masters.)

Thanks for the support.