dyrnq / kubeadm-vagrant

Run kubernetes cluster with kubeadm on vagrant.

error execution phase control-plane-prepare/download-certs: error downloading certs: error downloading the secret: Secret "kubeadm-certs" was not found in the "kube-system" Namespace. #11

Open dyrnq opened 3 years ago

dyrnq commented 3 years ago
    master2:  >>>   joining master node..
    master2: [preflight] Running pre-flight checks
    master2: W0913 07:52:00.257238   26568 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
    master2: [reset] No etcd config found. Assuming external etcd
    master2: [reset] Please, manually reset etcd to prevent further issues
    master2: [reset] Stopping the kubelet service
    master2: [reset] Unmounting mounted directories in "/var/lib/kubelet"
    master2: W0913 07:52:00.285141   26568 cleanupnode.go:99] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
    master2: [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
    master2: [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
    master2: [reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]
    master2: 
    master2: The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
    master2: 
    master2: The reset process does not reset or clean up iptables rules or IPVS tables.
    master2: If you wish to reset iptables, you must do so manually by using the "iptables" command.
    master2: 
    master2: If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
    master2: to reset your system's IPVS tables.
    master2: 
    master2: The reset process does not clean your kubeconfig files and you must remove them manually.
    master2: Please, check the contents of the $HOME/.kube/config file.
    master2: [preflight] Running pre-flight checks
    master2: [preflight] Reading configuration from the cluster...
    master2: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    master2: [preflight] Running pre-flight checks before initializing the new control plane instance
    master2: [preflight] Pulling images required for setting up a Kubernetes cluster
    master2: [preflight] This might take a minute or two, depending on the speed of your internet connection
    master2: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    master2: [download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
    master2: error execution phase control-plane-prepare/download-certs: error downloading certs: error downloading the secret: Secret "kubeadm-certs" was not found in the "kube-system" Namespace. This Secret might have expired. Please, run `kubeadm init phase upload-certs --upload-certs` on a control plane to generate a new one
    master2: To see the stack trace of this error execute with --v=5 or higher
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
dyrnq commented 3 years ago

There is a bootstrap-token- secret whose DESCRIPTION is "Proxy for managing TTL for the kubeadm-certs secret"; it has a 1h TTL by default. After 1h that bootstrap-token- secret expires, and because it owns the kubeadm-certs secret, kubeadm-certs is garbage-collected along with it.

Looking up the kubeadm-certs secret's ownerReferences reveals the owning bootstrap-token- secret:

kubectl -n kube-system get secret kubeadm-certs -o jsonpath='{ .metadata.ownerReferences[0].name }'
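The owner is a bootstrap-token secret, and its fields (including expiration) are stored base64-encoded. A sketch of checking when it expires; the kubectl lines assume the kubeadm-certs secret still exists, and the decoded sample value below is fabricated for illustration:

```shell
# Hypothetical check against a live cluster (requires kubeadm-certs to exist):
#   OWNER="$(kubectl -n kube-system get secret kubeadm-certs \
#       -o jsonpath='{.metadata.ownerReferences[0].name}')"
#   kubectl -n kube-system get secret "$OWNER" \
#       -o jsonpath='{.data.expiration}' | base64 -d
# Decoding a sample base64 expiration value (fabricated for illustration):
echo 'MjAyMS0wOS0xM1QwODo1MjowMFo=' | base64 -d; echo
```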

You may need to rerun this command on the first control-plane node:

kubeadm init phase upload-certs --upload-certs --config /tmp/kubeadm-config.yaml -v5

To make the bootstrap-token- secret never expire:

kubectl -n kube-system get secret "$(kubectl -n kube-system get secret kubeadm-certs -o jsonpath='{ .metadata.ownerReferences[0].name }')" -o yaml > token.yaml && \
sed -i "/  expiration:.*$/d" token.yaml && \
kubectl replace -f token.yaml
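A local sketch of what the sed step does, using a stand-in token.yaml (the names and base64 values are fabricated, not real cluster data): deleting the expiration field means the token controller never garbage-collects the secret, so the owned kubeadm-certs secret survives too.

```shell
# Fabricated stand-in for the YAML that `kubectl get secret ... -o yaml` returns:
cat > token.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-abcdef
  namespace: kube-system
type: bootstrap.kubernetes.io/bootstrap-token
data:
  expiration: MjAyMS0wOS0xM1QwODo1MjowMFo=
  token-id: YWJjZGVm
EOF
# Same deletion as the pipeline above: drop the expiration field entirely.
sed -i "/  expiration:.*$/d" token.yaml
! grep -q expiration token.yaml && echo "expiration removed"
```

Note that a never-expiring bootstrap token weakens the cluster's security posture; removing the expiration is a trade-off, not a free fix.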
dyrnq commented 3 years ago

If no kubeadm-config.yaml was saved, regenerate it from the kubeadm-config ConfigMap:

kubectl -n kube-system get cm kubeadm-config -o json |jq -r '.data.ClusterConfiguration' > /tmp/kubeadm-config.yaml
kubeadm init phase upload-certs --upload-certs --config /tmp/kubeadm-config.yaml -v5
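A local sketch of the jq extraction above, substituting a minimal fabricated ConfigMap JSON for the live `kubectl ... -o json` output (the version string is illustrative):

```shell
# Fabricated stand-in for `kubectl -n kube-system get cm kubeadm-config -o json`:
cat > cm.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "ConfigMap",
  "metadata": { "name": "kubeadm-config", "namespace": "kube-system" },
  "data": {
    "ClusterConfiguration": "apiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nkubernetesVersion: v1.19.0\n"
  }
}
EOF
# The ClusterConfiguration value is embedded YAML; -r emits it raw, ready to
# feed back to `kubeadm init phase upload-certs --config ...`.
jq -r '.data.ClusterConfiguration' cm.json > /tmp/kubeadm-config.yaml
cat /tmp/kubeadm-config.yaml
```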