Open jackiu opened 3 years ago
What error did you experience?
Certainly, adding a bit of explanation that waiting for the node to come up is required could help. @jackiu would you mind creating a PR for that?
I will create a PR for that. If we don't wait for the node to come up first, the two pods get scheduled on the same node (see output below). So if we run the FIS experiment and it happens to kill "ip-10-0-3-27.us-east-2.compute.internal", both pods would be killed and would need to be rescheduled to another node.
```
➜ ~ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP          NODE                                      NOMINATED NODE   READINESS GATES
hello-kubernetes-ffd764cf9-9v55s   1/1     Running   0          45h   10.0.3.90   ip-10-0-3-27.us-east-2.compute.internal
```
Actually that would make a good "next steps" learning to set pod "affinity".
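For that "next steps" learning, a minimal sketch of what pod anti-affinity could look like — deployment name, labels, and image here are assumptions for illustration, not taken from the workshop:

```yaml
# Hedged sketch: spread replicas across nodes so a single FIS-terminated
# instance doesn't take down both pods. Names/labels/image are assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never co-schedule two pods with this label on one node.
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: hello-kubernetes
            topologyKey: kubernetes.io/hostname
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.10  # assumed image
```

With `requiredDuringScheduling...`, the second replica stays Pending until a second node is Ready, which also surfaces the scheduling problem described above instead of silently co-locating the pods.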
There are no steps that wait for the 2nd node to be ready. The pod scaling should only be done after the 2nd node is ready.
Perhaps the workshop should wait for `kubectl get nodes` to return 2 nodes before proceeding?
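A minimal sketch of that wait, assuming `kubectl` is configured against the workshop cluster (the deployment name in the usage comment is an assumption):

```shell
#!/usr/bin/env sh
# Hedged sketch: poll `kubectl get nodes` until the requested number of
# nodes report Ready, then it is safe to scale the deployment.

wait_for_nodes() {
  # $1 = number of Ready nodes to wait for
  while [ "$(kubectl get nodes --no-headers 2>/dev/null | grep -c ' Ready ')" -lt "$1" ]; do
    echo "Waiting for $1 Ready nodes..."
    sleep 10
  done
}

# Usage, before scaling (deployment name assumed):
# wait_for_nodes 2
# kubectl scale deployment hello-kubernetes --replicas=2
```

The `grep -c ' Ready '` match (with surrounding spaces) deliberately excludes nodes in the `NotReady` state.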
https://chaos-engineering.workshop.aws/en/030_basic_content/070_containers/020_eks/30-fix-repeat.html#increase-the-number-of-containers