/label customer-request
This seems to be an issue with the kubelet certificates.
To add to what @kron4eg said, please check whether CSRs are getting properly approved, e.g. kubectl get csr,
and then approve all pending CSRs and try again after a few minutes. If it doesn't work after approving all CSRs, try restarting the kubelet.
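For example, a quick sketch (the one-liner approves every listed CSR, so only use it if all pending CSRs are legitimate kubelet CSRs):
kubectl get csr
kubectl get csr -o name | xargs kubectl certificate approve
# if that doesn't help, restart the kubelet on the affected node
sudo systemctl restart kubelet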
After some debugging, the problem seems to be the custom CA bundle. The caBundle:
option seems to override or mis-render the k8s CA.
1) Extract certificate-authority-data from the cluster's kubeconfig
2) Decode it via base64 -d
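Steps 1 and 2 as a one-liner sketch (assuming the cluster is the first entry in your kubeconfig):
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d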
3) Add the Kubernetes cluster CA to the caBundle field:
apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
versions:
  kubernetes: '1.29.8'
cloudProvider:
  aws: {}
  external: true
clusterNetwork:
  cni:
    cilium:
      kubeProxyReplacement: "strict"
      enableHubble: true
  kubeProxy:
    skipInstallation: true
addons:
  enable: true
  addons:
    - name: cluster-autoscaler
    - name: default-storage-class
caBundle: |-
  ## CUSTOM CA BUNDLE
  ## FIXME: Temporary Workaround adding CA of Kubernetes Cluster itself -> extracted from kubeconfig
  # CA Kubeconfig of cluster
  -----BEGIN CERTIFICATE-----
  xxxx
  -----END CERTIFICATE-----
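Then re-run kubeone apply so the updated caBundle is propagated to the cluster (a sketch; the manifest and Terraform output paths here are assumptions and depend on your setup):
kubeone apply -m kubeone.yaml -t tf.json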
4) Delete the MachineDeployments, so that the OSC gets recreated:
kubectl delete md -n kube-system --all
5) Re-apply the MachineDeployment:
kubectl apply -f machinedeployment.yaml
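To verify on a freshly provisioned worker, you can check that the file from the OSC snippet below actually contains the cluster CA (a sketch; the path is taken from that snippet):
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -subject -issuer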
The problem seems to be that the OSC doesn't get the k8s CA at this point, and instead only contains the caBundle:
- content:
    inline:
      data: "## CUSTOM CA BUNDLE \n## FIXME: Temporary Workaround adding CA of
        Kubernetes Cluster itself -> extracted from kubeconfig\n\n# CA Kubeconfig
        of AWS Seed cluster\n-----BEGIN CERTIFICATE-----xxxxxxxxxxxxxx\n-----END
        CERTIFICATE-----\n"
      encoding: b64
  path: /etc/kubernetes/pki/ca.crt
  permissions: 644
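For reference, the snippet above comes from the rendered OperatingSystemConfig, which can be inspected with something like the following (the resource name is hypothetical and varies per MachineDeployment):
kubectl get operatingsystemconfigs -n kube-system
kubectl get operatingsystemconfigs -n kube-system osc-<md-name> -o yaml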
It seems the OSP variable is not rendered correctly:
- path: /etc/kubernetes/pki/ca.crt
  content:
    inline:
      encoding: b64
      data: |
        {{ .KubernetesCACert }}
I think this is related to #399 (which might be a fix for it)?
Indeed, and the fix has not been released (and included in K1) yet.
So we just need a new release!
Awesome, so it's already cherry-picked for the next KubeOne 1.8 release?
@toschneck and KKP too of course.
OK, we have a new v1.5.3 release.
What happened?
After setting up a KubeOne cluster on AWS, any kubectl logs or kubectl exec command fails if the target is a worker machine created by the machine-controller. The workload of the cluster itself does not seem to be affected, but administration is heavily restricted, so this needs to be fixed soon.
Expected behavior
The commands kubectl logs and kubectl exec work for all nodes.
How to reproduce the issue?
Create a fresh KubeOne 1.8.2 cluster.
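Then try to read logs from or exec into any pod running on a machine-controller worker node (the pod name here is a hypothetical placeholder):
kubectl get pods -A -o wide
kubectl logs -n kube-system <pod-on-worker-node>
kubectl exec -n kube-system <pod-on-worker-node> -- true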
What KubeOne version are you using?
Provide your KubeOneCluster manifest here (if applicable)
What cloud provider are you running on?
AWS
What operating system are you running in your cluster?
Ubuntu 22.04
Additional information
It seems that the kubelet config of the nodes is not correct, or that the communication from the kubelet to the control plane is broken. Relevant Stack Overflow answer: https://stackoverflow.com/a/50020792
kubectl with the admin/super-admin kubeconfig also does not work from the control plane machines: