Closed: Huskydog9988 closed this issue 9 months ago
I suspect this is a perms issue with anon auth
Joining nodes with bootstrap tokens (the kubeadm default) requires anon auth. Alternatively a kubeconfig with certs can be used, but I'm not sure Kubespray supports that.
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/#file-or-https-based-discovery
Note: CA validation should be disabled too; check the join doc above for more info.
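For anyone landing here, a rough sketch of the two discovery modes described in that doc. This is plain kubeadm for illustration, not how Kubespray actually invokes it, and the endpoint, token, and discovery file path are made-up placeholders:

# Token-based discovery (the kubeadm default): the joining node fetches cluster-info
# anonymously, and CA validation is skipped only if you opt out of it explicitly.
kubeadm join 192.0.2.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-unsafe-skip-ca-verification

# File/HTTPS-based discovery: a kubeconfig carrying the CA (and optionally client certs),
# which does not rely on anonymous auth at all.
kubeadm join 192.0.2.10:6443 --discovery-file /path/to/discovery.conf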
I'm fairly new to k8s, so I think I need this spelled out for me. The solution you're suggesting is that I use certs to connect the nodes instead of the tokens Kubespray is using now? If so, could you point me to where I might find the necessary config values?
Unclear to me if this is possible. I will leave it to the Kubespray maintainers to respond.
AFAIK you should be getting this error when you disable kube_api_anonymous_auth, but that doesn't appear to be the case here? Did you maybe set it to false in a previous run? If so, I would recommend running reset.yml if you can afford to trash the cluster; if not, you could try the upgrade-cluster playbook.
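Concretely, and just as a sketch reusing the reporter's inventory and hardening-file paths from the command below (adjust to your own setup), those two options would look roughly like:

# Option 1: wipe and redeploy, if the cluster can be trashed.
# reset.yml asks for confirmation before it wipes anything.
ansible-playbook -i /inventory/inventory.ini --become --become-user=root --ask-become-pass \
  -e "@/hardening.yaml" reset.yml
ansible-playbook -i /inventory/inventory.ini --become --become-user=root --ask-become-pass \
  -e "@/hardening.yaml" cluster.yml

# Option 2: keep the cluster and try reconciling the setting in place.
ansible-playbook -i /inventory/inventory.ini --become --become-user=root --ask-become-pass \
  -e "@/hardening.yaml" upgrade-cluster.yml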
Yeah, last time I checked, disabling anon auth was broken in Kubespray.
I most certainly did try it with kube_api_anonymous_auth disabled, but I think I ran the reset playbook afterwards. I'll try the upgrade playbook first when I get a chance, and I'll let you know if either fails to resolve the issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed
You can:
- /remove-lifecycle stale
- /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed
You can:
- /remove-lifecycle rotten
- /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Was anyone able to get this to work? I only run into this on 22.04.
Not sure why this is closed, the issue is still here. I ran into it on Debian 12 with kube_api_anonymous_auth: true. Running the upgrade-cluster.yml playbook didn't help; reset.yml did, though.
Hey, is there a solution here yet? I ran reset.yml but unfortunately that didn't help.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed
You can:
- /reopen
- /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Environment:
Cloud provider or hardware configuration:
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
Version of Ansible (ansible --version):
Version of Python (python --version): Python 3.8.10
Kubespray version (commit) (git rev-parse --short HEAD): 2ae3ea9ee
Network plugin used: cilium
Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"): https://gist.github.com/Huskydog9988/19cedb17c3c416db98cf908779c07da0 (Don't worry about secrets, this is just a test cluster.)
Command used to invoke ansible:
ansible-playbook -i /inventory/inventory.ini --become --become-user=root --ask-become-pass -e "@/hardening.yaml" cluster.yml
Output of ansible run:
Anything else do we need to know:
I suspect this is a permissions issue with anon auth and the provided hardened config, but I'm not sure of the exact cause. (#9474 seems to be a similar issue.)
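A quick way to tell which behaviour a given control plane is actually serving (assuming the default RBAC bindings are still in place; the address below is a placeholder) is to hit the API server with no credentials at all:

# With kube_api_anonymous_auth: true, the request is treated as system:anonymous and the
# default system:public-info-viewer binding lets it read /version (also /healthz, /livez, /readyz).
# With anonymous auth disabled, the API server rejects it with 401 Unauthorized instead.
curl -k https://192.0.2.10:6443/version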