Closed: omegazeng closed this issue 5 years ago
Thanks for reporting it @omegazeng
Could you provide more information about this issue?
What is the output of kubectl get cm cluster-info -oyaml -n kube-public?
And how did you change controlPlaneEndpoint "master.HIDE.xyz" to "k8s-master.HIDE.xyz"?
@SataQiu Thank you!
kubectl get cm cluster-info -oyaml -n kube-public

apiVersion: v1
data:
  jws-kubeconfig-38t43b: eyJhbGciOiJIUzIXXXXXXXXXXX..1JkUOYiNFu5wkxkXXXXXX
  jws-kubeconfig-3dn914: eyJhbGciOiJIUzIXXXXXXXXXXX..eAiFwHUpd2RPJoBXXXXXX
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: XXXXXXXXXXXXXXXXXXXXXXX
        server: https://master.HIDE.xyz:6443
      name: ""
    contexts: []
    current-context: ""
    kind: Config
    preferences: {}
    users: []
kind: ConfigMap
metadata:
  creationTimestamp: "2019-01-17T12:30:54Z"
  name: cluster-info
  namespace: kube-public
  resourceVersion: "34316788"
  selfLink: /api/v1/namespaces/kube-public/configmaps/cluster-info
  uid: bb9ae460-1a53-11e9-b46c-000c296fd64f
I found server: https://master.HIDE.xyz:6443 in it. Is it safe to edit the cluster-info ConfigMap directly?
And how did you change controlPlaneEndpoint "master.HIDE.xyz" to "k8s-master.HIDE.xyz"?
update kubeadm-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "k8s-master.HIDE.xyz" ### ADD
  - "master.HIDE.xyz"
controlPlaneEndpoint: "k8s-master.HIDE.xyz:6443" ### Modify
etcd:
  external:
    endpoints:
    - https://172.16.10.136:2379
    - https://172.16.10.137:2379
    - https://172.16.10.138:2379
    caFile: /etc/ssl/etcd/etcd-root-ca.pem
    certFile: /etc/ssl/etcd/etcd.pem
    keyFile: /etc/ssl/etcd/etcd-key.pem
networking:
  # This CIDR is a calico default. Substitute or remove for your CNI provider.
  podSubnet: "192.168.0.0/16"
imageRepository: gcr.azk8s.cn/google_containers
# renew certs
kubeadm init phase certs apiserver --config kubeadm-config.yaml
# upgrade
kubeadm upgrade apply --config kubeadm-config.yaml
# restart kubelet
systemctl restart kubelet.service
# check config
kubeadm config view
# check certSANs
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text
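A quick way to double-check what the cluster currently advertises (this assumes kubectl access on a control-plane node) is to look at the two ConfigMaps kubeadm reads during join:
# check the controlPlaneEndpoint recorded in the kubeadm-config ConfigMap
kubectl -n kube-system get cm kubeadm-config -o yaml | grep controlPlaneEndpoint
# check the API server address in the bootstrap kubeconfig published for joining nodes
kubectl -n kube-public get cm cluster-info -o jsonpath='{.data.kubeconfig}' | grep server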
detail: https://github.com/kubernetes/kubeadm/issues/1447#issuecomment-489513430 https://github.com/kubernetes/kubeadm/issues/1447#issuecomment-490434779
As far as I know, kubeadm uses the cluster-info ConfigMap as the cluster configuration during join. This problem may be due to an incomplete modification. I think you can try to edit the cluster-info ConfigMap directly.
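One way to do that (assuming kubectl access with permission to edit ConfigMaps in kube-public) is to open the ConfigMap in an editor and change the server address inside the embedded kubeconfig:
# open the bootstrap kubeconfig stored in kube-public
kubectl -n kube-public edit configmap cluster-info
# inside data.kubeconfig, change
#   server: https://master.HIDE.xyz:6443
# to
#   server: https://k8s-master.HIDE.xyz:6443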
Rerun kubeadm join
I0605 17:20:12.457742 19165 join.go:427] [preflight] Discovering cluster-info
I0605 17:20:12.457886 19165 token.go:200] [discovery] Trying to connect to API Server "k8s-master.HIDE.xyz:6443"
I0605 17:20:12.458624 19165 token.go:75] [discovery] Created cluster-info discovery client, requesting info from "https://k8s-master.HIDE.xyz:6443"
I0605 17:20:22.952489 19165 token.go:141] [discovery] Requesting info from "https://k8s-master.HIDE.xyz:6443" again to validate TLS against the pinned public key
I0605 17:20:32.987015 19165 token.go:164] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "k8s-master.HIDE.xyz:6443"
I0605 17:20:32.987057 19165 token.go:206] [discovery] Successfully established connection with API Server "k8s-master.HIDE.xyz:6443"
I0605 17:20:32.987089 19165 join.go:441] [preflight] Fetching init configuration
I0605 17:20:32.987107 19165 join.go:474] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I0605 17:20:33.002346 19165 preflight.go:101] [preflight] Running configuration dependant checks
I0605 17:20:33.002378 19165 controlplaneprepare.go:207] [download-certs] Skipping certs download
I0605 17:20:33.002392 19165 kubelet.go:105] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0605 17:20:33.072207 19165 kubelet.go:130] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0605 17:20:33.165665 19165 kubelet.go:147] [kubelet-start] Starting the kubelet
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0605 17:20:44.259624 19165 kubelet.go:165] [kubelet-start] preserving the crisocket information for the node
I0605 17:20:44.259653 19165 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "worker-172-16-7-51" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
kubectl get no
NAME STATUS ROLES AGE VERSION
master-172-16-10-136 Ready master 138d v1.14.2
master-172-16-10-137 Ready master 138d v1.14.2
master-172-16-10-138 Ready master 138d v1.14.2
worker-172-16-10-139 Ready <none> 138d v1.14.2
worker-172-16-10-140 Ready <none> 138d v1.14.2
worker-172-16-10-141 Ready <none> 138d v1.14.2
worker-172-16-10-142 Ready <none> 63d v1.14.2
worker-172-16-10-143 Ready <none> 63d v1.14.2
worker-172-16-10-145 Ready <none> 63d v1.14.2
worker-172-16-10-169 Ready <none> 4d17h v1.14.2
worker-172-16-10-170 Ready <none> 4d17h v1.14.2
worker-172-16-10-171 Ready <none> 4d17h v1.14.2
worker-172-16-7-51 Ready <none> 2m14s v1.14.2
Thank you again!
I think "kubadm upgrade" should sync update ConfigMap cluster-info.
You are welcome! :blush:
In my case I've recreated the token:
kubeadm token create --print-join-command
and everything was fine.
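For reference, that command prints a join line that already embeds the current endpoint and CA hash, roughly of this shape (the endpoint, token, and hash below are placeholders, not values from this cluster):
kubeadm join k8s-master.HIDE.xyz:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>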
My case is a little different: I ran kubeadm join but it failed, because I had changed the Kubernetes API endpoint address.
Is this a request for help?
yes
What keywords did you search in kubeadm issues before filing this one?
kubeadm join unable to fetch the kubeadm-config ConfigMap controlPlaneEndpoint
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version):
Environment:
- Kubernetes version (use kubectl version):
- Kernel (e.g. uname -a):
What happened?
create a join token on the master node:
add a new worker node
https://github.com/kubernetes/kubeadm/issues/1447#issuecomment-490434779
I know "master.HIDE.xyz" cannot resolve, because I changed controlPlaneEndpoint from "master.HIDE.xyz" to "k8s-master.HIDE.xyz" and deleted the A record for "master.HIDE.xyz". But why does join fetch the kubeadm-config ConfigMap from https://master.HIDE.xyz:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config and not https://k8s-master.HIDE.xyz:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?
What you expected to happen?
add worker node
How to reproduce it (as minimally and precisely as possible)?
rerun kubeadm join
Anything else we need to know?
Thanks!