Open danieljkemp opened 1 year ago
Hi @danieljkemp, thanks for trying out BYOH. This seems like an RBAC issue. Did you follow the steps in the getting started guide (https://github.com/vmware-tanzu/cluster-api-provider-bringyourownhost/blob/main/docs/getting_started.md#generating-the-bootstrap-kubeconfig-file) to create the bootstrap kubeconfig for the initial one-time use on the host? This provides a bootstrap token kubeconfig with the required permissions to create a CSR.
I did, and I got the bootstrap config from the status field as described.
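(For anyone following along: a minimal sketch of pulling that generated kubeconfig out of the BootstrapKubeconfig object's status, per the guide's workflow. The object name, namespace, and status field name here are assumptions from memory, so verify them against the getting started doc.)

```sh
# Hedged sketch: read the generated bootstrap kubeconfig from the CR status.
# Object name, namespace, and status field are assumptions; check the guide
# for the exact resource your cluster uses.
kubectl get bootstrapkubeconfig bootstrap-kubeconfig -n default \
  -o jsonpath='{.status.bootstrapKubeconfigData}' > bootstrap-kubeconfig.conf
```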
Same error on a k8s 1.25.4 bootstrap cluster. Could it have something to do with service accounts no longer getting secrets automatically, so the kubeconfig is no longer valid? I think that changed in 1.24+.
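(Side note on that 1.24 change: ServiceAccounts no longer get a long-lived token Secret created automatically. A hedged sketch of the two usual workarounds; the account name `bootstrapuser` below is just a placeholder.)

```sh
# Option 1: request a short-lived token via the TokenRequest API (kubectl >= 1.24)
kubectl -n kube-system create token bootstrapuser --duration=24h

# Option 2: create a long-lived token Secret bound to the ServiceAccount
kubectl -n kube-system apply -f - <<EOF
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: bootstrapuser-token
  annotations:
    kubernetes.io/service-account.name: bootstrapuser
EOF
```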
Same error on k8s 1.23.5 bootstrap cluster unfortunately.
@danieljkemp Okay, the error is that the wrong bootstrap-kubeconfig was created. I tried with the regular kubeconfig copied to the master node (k3s.yaml from the bootstrap cluster), and that works.
kubectl get byoh -A
NAMESPACE   NAME             OSNAME   OSIMAGE              ARCH
default     tanzu-master-0   linux    Ubuntu 20.04.5 LTS   amd64
I had to install iptables on the master and worker nodes too, and now my cluster is up and running!
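(In case it helps others, a quick sketch of that step, assuming apt-based Ubuntu 20.04 hosts.)

```sh
# Install iptables on each BYOH host (assumes Ubuntu/Debian with apt)
sudo apt-get update && sudo apt-get install -y iptables
```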
> I have tried with the regular kubeconfig copied to the master node (k3s.yaml) on the bootstrap cluster and this is working.

Well, this defeats the purpose of having a bootstrap-kubeconfig. The idea is to share a kubeconfig that has restricted access; the regular one probably has admin-level privileges.
@anusha94 The way kubeconfigs are created changed in recent k8s versions. I agree this shouldn't expose admin access, but if one binds a role with restricted access in the script below, it will work.
```sh
export LOGIN_USER=bootstrapuser

kubectl -n kube-system create serviceaccount $LOGIN_USER

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: $LOGIN_USER
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "$LOGIN_USER"
EOF

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: $LOGIN_USER
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: $LOGIN_USER
  namespace: kube-system
EOF

kubectl -n kube-system get secret -o yaml $LOGIN_USER

export USER_TOKEN_NAME=$(kubectl -n kube-system get secret $LOGIN_USER -o=jsonpath='{.metadata.name}')
export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/${USER_TOKEN_NAME} -o=go-template='{{.data.token}}' | base64 --decode)
export CURRENT_CONTEXT=$(kubectl config current-context)
export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')
export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')

cat << EOF > $LOGIN_USER-config
apiVersion: v1
kind: Config
current-context: ${CURRENT_CONTEXT}
contexts:
- name: ${CURRENT_CONTEXT}
  context:
    cluster: ${CURRENT_CONTEXT}
    user: $LOGIN_USER
    namespace: kube-system
clusters:
- name: ${CURRENT_CONTEXT}
  cluster:
    certificate-authority-data: ${CLUSTER_CA}
    server: ${CLUSTER_SERVER}
users:
- name: $LOGIN_USER
  user:
    token: ${USER_TOKEN_VALUE}
EOF

kubectl --kubeconfig $(pwd)/$LOGIN_USER-config get all --all-namespaces
```
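Building on that last point, here is a hedged sketch of swapping the cluster-admin binding above for a narrower ClusterRole. The rule list is an assumption (just enough to create and watch CSRs during bootstrap); the BYOH agent may need more, so check the getting started guide before relying on it.

```sh
# Hedged sketch: bind the bootstrap ServiceAccount to a restricted ClusterRole
# instead of cluster-admin. The resources/verbs below are an assumption
# (CSR creation only); verify the full set the BYOH agent actually needs.
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: byoh-bootstrap-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests"]
  verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: byoh-bootstrap-csr
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: byoh-bootstrap-csr
subjects:
- kind: ServiceAccount
  name: $LOGIN_USER
  namespace: kube-system
EOF
```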
Same issue here.
Hit the same issue with k8s 1.27.2 using the --skip-installation flag. The bootstrap user has the cluster-admin role, so apparently it does not have restricted access.
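(If it helps to confirm that, a quick way to inspect what the bootstrap ServiceAccount is actually allowed to do; the account name and namespace below are placeholders from the script above.)

```sh
# List the effective permissions of the bootstrap ServiceAccount
# (account name and namespace are placeholders; adjust to your setup)
kubectl auth can-i --list --as=system:serviceaccount:kube-system:bootstrapuser
```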
What steps did you take and what happened: [A clear and concise description of what the bug is.]
When running the BYOH agent on the new node, I am getting the following error
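(For reproduction context, a hedged sketch of how the agent is typically started against the bootstrap kubeconfig; the binary name and flag are taken from memory of the getting started guide, so treat them as assumptions and double-check there.)

```sh
# Assumed invocation of the BYOH host agent on the new node
# (binary name and --bootstrap-kubeconfig flag are assumptions; see the guide)
./byoh-hostagent-linux-amd64 --bootstrap-kubeconfig bootstrap-kubeconfig.conf
```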
What did you expect to happen: No errors, and the node visible in kubectl get byohosts.
Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
Environment:
- Kubernetes version (kubectl version --short): 1.24
- OS (/etc/os-release): Ubuntu 20.04.5 LTS (Focal Fossa)