Closed: HeroCC closed this issue 1 month ago.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
I'm having the same issue. I tried the methods from https://github.com/kubernetes/kubectl/issues/1405#issuecomment-1497900520 but got the following error:

```
$ kubectl delete secret kubeadm-certs -n kube-system
Error from server: illegal base64 data at input byte 3
```

Unfortunately, it looks like only a few people have hit this issue, so there is almost nothing to search for on the internet. Anyway, I'm stuck at this point.

I tried the hard way: deleting the key from etcd.

```
# Run inside the etcd container (in my context):
ETCDCTL_API=3 etcdctl del /registry/secrets/kube-system/kubeadm-certs
```

I wouldn't recommend this at all unless you have lost all hope and are about to rebuild the whole cluster from scratch :).
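For anyone attempting the same last-resort route, here is a slightly fuller sketch of the steps involved. It assumes a kubeadm-style etcd with TLS certificates under /etc/kubernetes/pki/etcd and a local endpoint on 127.0.0.1:2379; Kubespray may keep the certificates elsewhere, so adjust paths before running anything. The verification and regeneration steps are illustrative additions, not taken from the comment above.

```sh
# Last-resort sketch: delete the corrupted kubeadm-certs object straight from etcd.
# Endpoint and cert paths are assumptions for a kubeadm-style layout; adjust for your setup.
export ETCDCTL_API=3
FLAGS="--endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key"

# Confirm the key exists before touching it
etcdctl $FLAGS get /registry/secrets/kube-system/kubeadm-certs --keys-only

# Remove the broken Secret directly from the store
etcdctl $FLAGS del /registry/secrets/kube-system/kubeadm-certs

# If the cluster still needs kubeadm-certs (e.g. for joining control-plane nodes),
# kubeadm can re-create and re-upload it from an existing control-plane node:
kubeadm init phase upload-certs --upload-certs
```

As with the comment above, treat this as a destructive last resort; taking an `etcdctl snapshot save` beforehand is a sensible precaution.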
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Environment:
- Cloud provider or hardware configuration: Baremetal / VMs
- OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`): I'm running Ansible from macOS.
- Version of Ansible (`ansible --version`):
- Version of Python (`python --version`):
- Kubespray version (commit) (`git rev-parse --short HEAD`):
- Network plugin used: Weave
- Full inventory with variables (`ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"`): see Details below
``` ❯ ansible -i inventory/mycluster/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]" | sed "s/conlanc/me/g" home-kube-master | SUCCESS => { "hostvars[inventory_hostname]": { "ansible_check_mode": false, "ansible_config_file": null, "ansible_diff_mode": false, "ansible_facts": {}, "ansible_forks": 5, "ansible_host": "10.0.1.16", "ansible_inventory_sources": [ "/Users/me/Documents/infra/kube-deploy/kubespray/inventory/mycluster/inventory.ini" ], "ansible_playbook_python": "/usr/local/Cellar/ansible/8.1.0/libexec/bin/python3.11", "ansible_ssh_user": "cc", "ansible_verbosity": 0, "ansible_version": { "full": "2.15.1", "major": 2, "minor": 15, "revision": 1, "string": "2.15.1" }, "bin_dir": "/usr/local/bin", "docker_bin_dir": "/usr/bin", "docker_container_storage_setup": false, "docker_daemon_graph": "/var/lib/docker", "docker_dns_servers_strict": false, "docker_iptables_enabled": "false", "docker_log_opts": "--log-opt max-size=50m --log-opt max-file=5", "docker_rpm_keepcache": 1, "docker_storage_options": "-s overlay2", "etcd_data_dir": "/var/lib/etcd", "etcd_deployment_type": "docker", "etcd_kubeadm_enabled": false, "group_names": [ "etcd", "k8s_cluster", "kube_control_plane" ], "groups": { "all": [ "home-kube-master", "home-kube-worker1" ], "calico_rr": [], "etcd": [ "home-kube-master" ], "k8s_cluster": [ "home-kube-master", "home-kube-worker1" ], "kube_control_plane": [ "home-kube-master" ], "kube_node": [ "home-kube-worker1" ], "ungrouped": [] }, "inventory_dir": "/Users/me/Documents/infra/kube-deploy/kubespray/inventory/mycluster", "inventory_file": "/Users/me/Documents/infra/kube-deploy/kubespray/inventory/mycluster/inventory.ini", "inventory_hostname": "home-kube-master", "inventory_hostname_short": "home-kube-master", "loadbalancer_apiserver_healthcheck_port": 8081, "loadbalancer_apiserver_port": 6443, "no_proxy_exclude_workers": false, "omit": "__omit_place_holder__470692146cdc8fba57d54604cb411cf646ea398d", "playbook_dir": "/Users/me/Documents/infra/kube-deploy/kubespray" } } home-kube-worker1 | SUCCESS => { "hostvars[inventory_hostname]": { "ansible_check_mode": false, "ansible_config_file": null, "ansible_diff_mode": false, "ansible_facts": {}, "ansible_forks": 5, "ansible_host": "10.0.1.17", "ansible_inventory_sources": [ "/Users/me/Documents/infra/kube-deploy/kubespray/inventory/mycluster/inventory.ini" ], "ansible_playbook_python": "/usr/local/Cellar/ansible/8.1.0/libexec/bin/python3.11", "ansible_ssh_user": "cc", "ansible_verbosity": 0, "ansible_version": { "full": "2.15.1", "major": 2, "minor": 15, "revision": 1, "string": "2.15.1" }, "bin_dir": "/usr/local/bin", "docker_bin_dir": "/usr/bin", "docker_container_storage_setup": false, "docker_daemon_graph": "/var/lib/docker", "docker_dns_servers_strict": false, "docker_iptables_enabled": "false", "docker_log_opts": "--log-opt max-size=50m --log-opt max-file=5", "docker_rpm_keepcache": 1, "docker_storage_options": "-s overlay2", "etcd_data_dir": "/var/lib/etcd", "etcd_kubeadm_enabled": false, "group_names": [ "k8s_cluster", "kube_node" ], "groups": { "all": [ "home-kube-master", "home-kube-worker1" ], "calico_rr": [], "etcd": [ "home-kube-master" ], "k8s_cluster": [ "home-kube-master", "home-kube-worker1" ], "kube_control_plane": [ "home-kube-master" ], "kube_node": [ "home-kube-worker1" ], "ungrouped": [] }, "inventory_dir": "/Users/me/Documents/infra/kube-deploy/kubespray/inventory/mycluster", "inventory_file": "/Users/me/Documents/infra/kube-deploy/kubespray/inventory/mycluster/inventory.ini", 
"inventory_hostname": "home-kube-worker1", "inventory_hostname_short": "home-kube-worker1", "loadbalancer_apiserver_healthcheck_port": 8081, "loadbalancer_apiserver_port": 6443, "no_proxy_exclude_workers": false, "omit": "__omit_place_holder__470692146cdc8fba57d54604cb411cf646ea398d", "playbook_dir": "/Users/me/Documents/infra/kube-deploy/kubespray" } } ```
Command used to invoke ansible:
Output of ansible run:
The only error printed is this:
When running the command by hand, I get the same error:
I'm also now unable to edit or delete the secret at all:
Anything else do we need to know:
This error first occurred when attempting to upgrade from Kubespray v2.15 to v2.16.0. I tried bumping to 2.17 to see whether it had been fixed between releases, but no dice. The cluster was in fine working order before; now I'm getting this error and many functions of the cluster are degraded. I have tried rerunning the playbook, and a quick scan through the actual certificates suggests they are fine, so I'm not sure what else to do here, especially since I can't even delete the broken secret. I'd appreciate any help you can give me!
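A hedged diagnostic sketch that may help narrow this down: check whether the corruption lives in the on-disk control-plane certificates or in the stored Secret object itself. Paths assume a standard kubeadm layout on the control-plane node, and the secret name is taken from the error above; adjust as needed.

```sh
# 1. Inspect the on-disk control-plane certificates (assumed kubeadm paths):
for crt in /etc/kubernetes/pki/*.crt; do
  echo "== $crt"
  openssl x509 -in "$crt" -noout -subject -enddate
done

# 2. Ask the API server for the raw Secret. If this fails with the same
#    "illegal base64 data" message, the stored object itself is corrupted
#    rather than anything on disk:
kubectl get secret kubeadm-certs -n kube-system -o json
```

If the stored object turns out to be the broken piece, the etcd-level deletion described earlier in the thread (followed by recreating the secret) may be the only way forward short of rebuilding the cluster.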