Zooz / predator

A powerful open-source platform for load testing APIs.
https://zooz.github.io/predator/
Apache License 2.0

Kubernetes: Allow to specify NodeSelector and/or Affinity for Runners #298

Closed: roychri closed this issue 4 years ago

roychri commented 4 years ago

Yesterday I installed Predator for the first time and launched my first test. Good job on turning Artillery into a distributed load test that runs as Kubernetes Jobs! :heart:

I wanted to test the HPA for some of my applications, to make sure the HPA was set up right and to see how much pounding they could take.

To make sure the tests were not interfering with the application being tested, I created a new GKE node pool with a specific label purpose:loadtesting, and I configured my app to never be scheduled on those nodes (using affinity).
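
For reference, the app deployments keep their pods off those nodes with a rule along these lines (simplified sketch):

    # Require nodes that do NOT carry purpose=loadtesting
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: purpose
                  operator: NotIn
                  values:
                    - loadtesting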

Here is how I created the node pool in GKE:

    gcloud container node-pools create ${POOL_NAME} \
           --cluster ${CLUSTER} \
           --enable-autoscaling \
           --machine-type n1-standard-4 \
           --max-nodes 12 \
           --min-nodes 3 \
           --node-version 1.15.11-gke.5 \
           --num-nodes 4 \
           --scopes gke-default,logging-write,monitoring \
           --zone ${ZONE} \
           --node-labels=purpose=loadtesting
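
To double-check that the label landed on the new nodes:

    kubectl get nodes -l purpose=loadtesting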

Then I wanted the tests (runners) to run ONLY on nodes that have the label purpose:loadtesting.

So that's my suggestion: being able to force the runner pods onto specific nodes (using nodeSelector, for example), or to prevent them from running on some nodes using affinity.

I was able to install the Predator helm chart and specify that I wanted it to run on my loadtesting nodes by using:

    helm install predator zooz/predator --set nodeSelector.purpose=loadtesting

and that worked just fine. But it has no effect on the runners.

So what I ended up doing was cloning your repo and making a small change to src/jobs/models/kubernetes/jobTemplate.js, and now the runners run on the nodes I want.

The diff looks like:

--- a/src/jobs/models/kubernetes/jobTemplate.js
+++ b/src/jobs/models/kubernetes/jobTemplate.js
@@ -30,6 +30,9 @@ module.exports.createJobRequest = (jobName, runId, parallelism, environmentVaria
                             'env': Object.keys(environmentVariables).map(environmentVariable => ({ name: environmentVariable, value: environmentVariables[environmentVariable] }))
                         }
                     ],
+                    'nodeSelector': {
+                        'purpose': 'loadtesting'
+                    },
                     'restartPolicy': 'Never'
                 }
             },

But this only works for my case. It would be nice if the helm chart values had a section called runners where I could specify things like nodeSelector, affinity, or tolerations, which would be applied to the runner Jobs.
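
Something like this in the chart's values.yaml (hypothetical keys, just to illustrate the shape I have in mind):

    # Hypothetical values.yaml section applied to runner Jobs only
    runners:
      nodeSelector:
        purpose: loadtesting
      affinity: {}
      tolerations: []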

The alternative would be to change the UI to ask for these settings, but... since those settings only apply to Kubernetes, I thought it made more sense to have them as part of the chart's configurable values...

Let me know if you need any more details.

NivLipetz commented 4 years ago

Hi @roychri! Thanks for the very thorough feature request. We will release this in 1.4.0.

roychri commented 4 years ago

Do you know how you would implement this already? Do you need suggestions? Would a PR help?

enudler commented 4 years ago

I'm still thinking about it.

As of today, we have tried to keep everything in the configuration endpoint rather than in the helm chart/environment variables, so that configuration can be changed easily and dynamically without having to restart Predator with new environment variables.

This kind of configuration can also be relevant for DC/OS constraints (https://mesosphere.github.io/marathon/docs/constraints.html). As a starting point, however, k8s would be our main focus, since judging from the current Predator user base it is mostly deployed in k8s clusters.
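
For example, the rough Marathon equivalent of a nodeSelector would be a constraint in the app definition (sketch, assuming an agent attribute named purpose):

    "constraints": [
        ["purpose", "CLUSTER", "loadtesting"]
    ]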

@roychri what do you have in mind? A PR would be very nice and is always much appreciated.

enudler commented 4 years ago

Hi @NivLipetz @roychri I would like to start working on that one.

Proposal: add a ConfigMap to the Predator helm chart where you can define the runner params, for example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: predator-runner-configmap
    data:
      template: |-
        {
          "spec": {
            "template": {
              "metadata": {
                "annotations": {
                  "traffic.sidecar.istio.io/excludeOutboundPorts": "8060"
                }
              }
            }
          }
        }

Predator will merge the given template with its own current 'hardcoded' job template. This will give good flexibility for future uses and adjustments.
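
For instance, the nodeSelector from the original request in this issue could then be expressed as an overlay (sketch, same format as the ConfigMap above):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: predator-runner-configmap
    data:
      template: |-
        {
          "spec": {
            "template": {
              "spec": {
                "nodeSelector": {
                  "purpose": "loadtesting"
                }
              }
            }
          }
        }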

What do you think?

NivLipetz commented 4 years ago

I think it's a good idea - it will let us give the runner customisable configuration without restrictions. It's just important to add validation on the merged template so the runners aren't started with a corrupt job template.
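
One cheap way to do that would be a server-side dry run of the merged template against the Kubernetes API, which rejects a corrupt spec without creating any pods (sketch; $APISERVER, $TOKEN, and the file name are placeholders):

    # Validate the merged job template server-side without creating it
    curl -X POST "https://$APISERVER/apis/batch/v1/namespaces/default/jobs?dryRun=All" \
         -H "Authorization: Bearer $TOKEN" \
         -H "Content-Type: application/json" \
         -d @merged-runner-job.json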

enudler commented 4 years ago

Merged in #322 to master; will be released as part of 1.4.

Example usage:

curl -X PUT \
  http://PREDATOR-API-URL/v1/config \
  -H 'Content-Type: application/json' \
  -d '{
    "custom_runner_definition": {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{
                        "resources": {
                            "requests": {
                                "memory": "128Mi",
                                "cpu": "0.5"
                            },
                            "limits": {
                                "memory": "1024Mi",
                                "cpu": "1"
                            }
                        }
                    }]
                }
            }
        }
    }
}'
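
And, tying it back to the original request, a nodeSelector for the runners would go in the same way (untested sketch):

curl -X PUT \
  http://PREDATOR-API-URL/v1/config \
  -H 'Content-Type: application/json' \
  -d '{
    "custom_runner_definition": {
        "spec": {
            "template": {
                "spec": {
                    "nodeSelector": {
                        "purpose": "loadtesting"
                    }
                }
            }
        }
    }
}'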

enudler commented 4 years ago

@roychri although the issue is closed, I realized that I had posted a wrong example, so the example is now updated :)