invidian opened 3 years ago
Hmm, this turns out to be caused by #669.
Applying these extra manifests and re-creating all pods makes things work:
```yaml
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  name: default
  namespace: capi-system
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  name: default
  namespace: capi-kubeadm-bootstrap-system
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  name: default
  namespace: capi-kubeadm-control-plane-system
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  name: default
  namespace: capi-webhook-system
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  name: default
  namespace: cluster-api-provider-packet-system
```
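For reference, a minimal sketch of applying the workaround, assuming the manifests above are saved as `capi-service-accounts.yaml` (the filename is an assumption; the namespaces are the ones listed above):

```sh
# Create/update the default ServiceAccounts with token automounting enabled.
kubectl apply -f capi-service-accounts.yaml

# Delete all pods in the affected namespaces; their Deployments
# re-create them with the service account token mounted.
for ns in capi-system capi-kubeadm-bootstrap-system \
    capi-kubeadm-control-plane-system capi-webhook-system \
    cluster-api-provider-packet-system; do
  kubectl -n "$ns" delete pods --all
done
```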
I guess other than documentation there is not much we can do about that, except of course contributing to and solving kubernetes-sigs/cluster-api#3836.
Right now, running

```sh
clusterctl init --infrastructure packet
```

on a fresh Lokomotive cluster on AWS ends up in the following situation:

This is because of the PSPs we ship and https://github.com/kubernetes-sigs/cluster-api/issues/3836.
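One way to confirm the PSP rejection is to look at the controller ReplicaSet events: pods denied by a PodSecurityPolicy are never created, and the failure surfaces as `FailedCreate` events. A diagnostic sketch (using `capi-system`, one of the namespaces listed above; the exact event text depends on the PSP setup):

```sh
# No pods appear in the namespace...
kubectl -n capi-system get pods

# ...so check the ReplicaSet events for the FailedCreate reason
# emitted when the PodSecurityPolicy rejects pod creation.
kubectl -n capi-system describe replicasets
```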
The PSPs can be worked around by applying the following manifests:
This makes the pods spawn, but they are still crashing with logs like: