Closed vincepri closed 4 years ago
/assign @chuckha
/lifecycle active
Reproduced easily. It looks like everything works from the AWS console, but the actual cluster does not have all the nodes that were created.
Using this YAML https://gist.github.com/chuckha/9d4ab8252709a426dbd493f603d9ac64
The issue is that we are trying to manage all the certificates, and there are three cases to handle: control-plane init, control-plane join, and worker join. We currently only handle two of them, control-plane init and worker join.
Each case needs its own set of certificates and must be treated separately.
this is looking better :)
```
chuckh-a02:capi-dev cha$ export KUBECONFIG=my-target-cluster.conf
chuckh-a02:capi-dev cha$ k get nodes
NAME                                       STATUS     ROLES    AGE     VERSION
ip-10-0-0-196.us-west-2.compute.internal   NotReady   master   13m     v1.16.0
ip-10-0-0-45.us-west-2.compute.internal    NotReady   master   2m22s   v1.16.0
ip-10-0-0-84.us-west-2.compute.internal    NotReady   master   3m8s    v1.16.0
```
Nice!
/kind bug
What steps did you take and what happened: Props to @dims' awesome e2e testing coverage for GCP; we noticed that additional control-plane machines never reach the running state.