Closed fionera closed 1 year ago
The following was observed in production:
In other words:
What we expected to happen:
All nodes that have the KW role and are approved should have appeared in kubectl at step 3.
What we observed:
Nodes got added to kubectl only after there were no more nodes with both KW and Unapproved.
Interpretation:
The presence of nodes that have the KW role but are still Unapproved may be blocking whatever logic adds nodes to kubectl from making progress.
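To make the interpretation concrete, here is a minimal sketch of how such blocking could arise. This is hypothetical: the real reconciler code isn't shown in this issue, and all names (`Node`, `buggyJoin`, `skippingJoin`) are invented for illustration. The suspected bug would be an early `break` on the first non-Approved KW node, so approved nodes ordered after it never join; the expected behaviour skips unapproved nodes instead.

```go
package main

import "fmt"

type State int

const (
	New State = iota
	Unapproved
	Approved
)

type Node struct {
	Name  string
	Roles []string
	State State
}

func hasRole(n Node, role string) bool {
	for _, r := range n.Roles {
		if r == role {
			return true
		}
	}
	return false
}

// buggyJoin models the suspected behaviour: iteration stops at the
// first KW node that is not yet Approved, so approved nodes that
// come after it are never added.
func buggyJoin(nodes []Node) []string {
	var joined []string
	for _, n := range nodes {
		if !hasRole(n, "KubernetesWorker") {
			continue
		}
		if n.State != Approved {
			break // an unapproved node blocks every later node
		}
		joined = append(joined, n.Name)
	}
	return joined
}

// skippingJoin models the expected behaviour: unapproved nodes are
// skipped, not treated as a barrier.
func skippingJoin(nodes []Node) []string {
	var joined []string
	for _, n := range nodes {
		if hasRole(n, "KubernetesWorker") && n.State == Approved {
			joined = append(joined, n.Name)
		}
	}
	return joined
}

func main() {
	nodes := []Node{
		{Name: "w1", Roles: []string{"KubernetesWorker"}, State: Approved},
		{Name: "w2", Roles: []string{"KubernetesWorker"}, State: Unapproved},
		{Name: "w3", Roles: []string{"KubernetesWorker"}, State: Approved},
	}
	fmt.Println(buggyJoin(nodes))    // [w1]
	fmt.Println(skippingJoin(nodes)) // [w1 w3]
}
```

Both variants eventually agree once every KW node is approved, which would match the observation that nodes appeared only after no KW+Unapproved nodes remained.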
I'm not able to replicate this in a test.
This is the scenario I'm using:
@fionera Does this match what you've seen in prod, or am I misunderstanding the report?
I also tried swapping steps 2 and 3, and that works as well.
@fionera Have you observed this behaviour during the newest cluster re-deployments?
Closing this, as it couldn't be replicated and hasn't happened in production again.
When setting up a new cluster, you can currently add the KubernetesWorker role to a Node that is still in the New state. That prevents other nodes from joining the k8s cluster. Approving these new Nodes fixes it.
Solution: block adding roles to nodes in the New state.
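The proposed fix could be sketched as a validation at role-assignment time. This is only an illustration of the idea, not the project's actual API: `Node`, `State`, and `addRole` are hypothetical names, and the real implementation would live wherever roles are assigned today.

```go
package main

import "fmt"

type State int

const (
	New State = iota
	Unapproved
	Approved
)

type Node struct {
	Name  string
	Roles []string
	State State
}

// addRole rejects role changes while a node is still in the New
// state, so a KubernetesWorker role can never be attached before
// the node has at least been through approval.
func addRole(n *Node, role string) error {
	if n.State == New {
		return fmt.Errorf("cannot add role %q: node %s is still in New state", role, n.Name)
	}
	n.Roles = append(n.Roles, role)
	return nil
}

func main() {
	fresh := &Node{Name: "w1", State: New}
	if err := addRole(fresh, "KubernetesWorker"); err != nil {
		fmt.Println(err)
	}

	approved := &Node{Name: "w2", State: Approved}
	if err := addRole(approved, "KubernetesWorker"); err == nil {
		fmt.Println("roles:", approved.Roles)
	}
}
```

Rejecting the change outright (rather than silently deferring it) surfaces the misconfiguration to the operator at setup time instead of stalling cluster joins later.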