cloud-bulldozer / e2e-benchmarking


router-perf-v2 router tune_liveness_probe can't work on single worker node because of anti-affinity rule #220

Closed qiliRedHat closed 3 years ago

qiliRedHat commented 3 years ago

Problem: The target cluster has a single worker node and only one router. With NUMBER_OF_ROUTERS=1 and NODE_SELECTOR={node-role.kubernetes.io/worker: }, router-perf-v2 can't work. Error logs:

08-23 16:30:30.977  23-08-2021T08:30:30 Scaling number of routers to 1
08-23 16:30:31.243  deployment.apps/router-default scaled
08-23 16:30:31.526  Waiting for deployment "router-default" rollout to finish: 1 old replicas are pending termination...
08-23 16:40:40.639  error: deployment "router-default" exceeded its progress deadline
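
For context, the run was driven roughly like this. This is only a sketch: NUMBER_OF_ROUTERS and NODE_SELECTOR are taken from the report above, while the entry-point script name is an assumption.

# Sketch only: the variables come from this report, the script name is assumed
export NUMBER_OF_ROUTERS=1
export NODE_SELECTOR='{node-role.kubernetes.io/worker: }'
cd workloads/router-perf-v2
./ingress-performance.sh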

Analysis of the cause:

After running this line of code https://github.com/cloud-bulldozer/e2e-benchmarking/blob/9da00a2f270cffe5e3314360391656ef6d2f46cb/workloads/router-perf-v2/common.sh#L61
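
That line is the liveness-probe tuning command (also quoted by @kedark3 further down in this thread):

oc set probe -n openshift-ingress --liveness --period-seconds=$((RUNTIME * 2)) deploy/router-default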

A new ReplicaSet, router-default-d9888dff8, is created to roll the change out to a new pod.

 % oc get pods -n openshift-ingress                                    
NAME                              READY   STATUS    RESTARTS   AGE
router-default-5844bb8f66-jhxph   1/1     Running   0          3h27m
router-default-d9888dff8-pb4kg    0/1     Pending   0          11m

However, because of the router deployment's pod anti-affinity rule, the new pod cannot be scheduled: there is only one worker node, and the old router pod is still running on it.
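
The rule can be checked directly on the deployment; this is a sketch using the standard deployment field path:

% oc get deploy/router-default -n openshift-ingress -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'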

% oc describe pod router-default-d9888dff8-pb4kg -n openshift-ingress
...
Controlled By:        ReplicaSet/router-default-d9888dff8
...
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  18s   default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

Then, when the following code is run https://github.com/cloud-bulldozer/e2e-benchmarking/blob/9da00a2f270cffe5e3314360391656ef6d2f46cb/workloads/router-perf-v2/common.sh#L63-L64

the error happens:

% oc scale --replicas=1 -n openshift-ingress deploy/router-default
deployment.apps/router-default scaled
% oc rollout status -n openshift-ingress deploy/router-default
error: deployment "router-default" exceeded its progress deadline

The scale-up tries to work on the ReplicaSet router-default-d9888dff8, which is not READY.

% oc describe -n openshift-ingress deploy/router-default
...
OldReplicaSets:  router-default-5844bb8f66 (1/1 replicas created), router-default-d9888dff8 (1/1 replicas created)

...
  Normal  ScalingReplicaSet  22m (x5 over 128m)  deployment-controller  Scaled up replica set router-default-d9888dff8 to 1
 % oc get rs -n openshift-ingress
NAME                        DESIRED   CURRENT   READY   AGE
router-default-5844bb8f66   1         1         1       3h44m
router-default-d9888dff8    1         1         0       134m
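
Not part of the proposal below, but for anyone who hits this stuck state, one way to recover manually could be to roll the deployment back to its previous revision with the standard rollout command:

% oc rollout undo -n openshift-ingress deploy/router-default

This reverts the liveness-probe change, so the existing router pod satisfies the template again and the rollout completes.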

Proposal: To make router-perf-v2 work on a single-worker-node cluster, one option could be to add logic so that when NUMBER_OF_ROUTERS is set to -1, the tune_liveness_probe and enable_ingress_operator functions are skipped.
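
A minimal sketch of the proposed guard, assuming the two functions are called from the workload's entry point (the call sites are assumptions; the variable and function names come from the repo):

# Sketch only: skip the router tuning steps when NUMBER_OF_ROUTERS=-1
if [[ ${NUMBER_OF_ROUTERS} -ne -1 ]]; then
  tune_liveness_probe
fi

# ... rest of the benchmark ...

if [[ ${NUMBER_OF_ROUTERS} -ne -1 ]]; then
  enable_ingress_operator
fi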

qiliRedHat commented 3 years ago

@rsevilla87 Please take a look at the proposal and let me know your thoughts; if you agree, I can open a PR.

kedark3 commented 3 years ago

I will let @rsevilla87 make the final call on the proposed solution, but one of the ways I tackled this issue elsewhere was to explicitly scale the replicas down to 0, then make the deployment update, and scale back up. So before running the line oc set probe -n openshift-ingress --liveness --period-seconds=$((RUNTIME * 2)) deploy/router-default

you would run oc scale -n openshift-ingress deploy/router-default --replicas 0

and afterwards run oc scale -n openshift-ingress deploy/router-default --replicas 1

so together it would look like:

oc scale -n openshift-ingress deploy/router-default --replicas 0
oc set probe -n openshift-ingress --liveness --period-seconds=$((RUNTIME * 2)) deploy/router-default
oc scale -n openshift-ingress deploy/router-default --replicas 1
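
Optionally, a rollout wait could be appended (the same command shown in the error output above) to make sure the updated router pod is ready again before the test continues:

oc rollout status -n openshift-ingress deploy/router-default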
rsevilla87 commented 3 years ago

Thanks for reporting this @qiliRedHat, this is a corner case I didn't consider when I coded this benchmark. I like @kedark3's approach, simple and effective.