AbdelrhmanHamouda / locust-k8s-operator

Deploy and manage Locust tests on Kubernetes.
https://abdelrhmanhamouda.github.io/locust-k8s-operator/
Apache License 2.0

[BUG] Operator doesn't manage the metric exporter sidecar #72

Open eduardchernomaz opened 1 year ago

eduardchernomaz commented 1 year ago

Describe the bug: After the LocustTest resource has been applied and the test has run for the specified duration, the run fails to finish: the worker job completes, while the master job continues to run.

To Reproduce: Steps to reproduce the behavior:

  1. Apply the LocustTest manifest to start the test.
  2. Once the test has run for the specified duration, list the available jobs and pods (see the kubectl sketch below).
  3. You should see that the worker job and worker pods have completed, while the master job has not completed and its pod is in a NotReady state.
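
For reference, steps 2–3 can be checked with plain kubectl (the resource names below are illustrative and depend on the LocustTest resource):

```bash
# List the jobs the operator created for the test
kubectl get jobs

# The worker pod shows Completed, while the master pod keeps Running
# with READY 1/2: the metrics exporter sidecar is still up
kubectl get pods

# Show the per-container state inside the master pod
kubectl get pod <master-pod-name> \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.state}{"\n"}{end}'
```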

Expected behavior: Once the test has completed, both the worker and the master pods should be in a Completed state and eventually be removed.

Screenshots: Pod status after the test has completed.

Job status after the test has completed.

Additional context: I suspect the problem is that, in the master pod, the locust-metrics-exporter container never stops and continues to run, failing to signal job completion.

AbdelrhmanHamouda commented 1 year ago

Hello @eduardchernomaz, your assessment of the root cause is on point. Indeed, this behavior is caused by the metrics exporter. This is a known issue that I am aware of and intend to solve.

Problem explanation: The exporter is a sidecar container whose existence the native Locust image is not aware of. Kubernetes also doesn't provide native support for defining sidecar container behavior, e.g. "shut down after container x exits". This is important to understand because even when I solve this for the metrics exporter container, the same issue will occur if your organisation has a cluster configuration that injects other sidecars, e.g. an Istio sidecar.
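
To make the failure mode concrete, the master pod ends up looking roughly like this (container names and images are illustrative, not the operator's exact spec); the Job only completes when every container in the pod has exited:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: locust-master-example       # created for the master Job
spec:
  restartPolicy: Never
  containers:
    - name: locust                  # exits on its own when the test duration elapses
      image: locustio/locust
    - name: locust-metrics-exporter # sidecar: never exits on its own, so the
      image: containersol/locust_exporter # pod, and thus the Job, never completes
```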

In all cases, until I push the fix, here are two simple workarounds to employ in the meantime.

Workarounds:

AbdelrhmanHamouda commented 1 year ago

More info on the proposed fix (still under investigation): the idea is to extend the Operator's operation to include container management as a secondary resource of the custom resource. Meaning that after the Operator creates the main cluster resources, it needs to register a reconciler to manage the secondary resources created by Kubernetes itself. Doing so, the Operator can start reacting to events coming from specific containers within specific pods.
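
A very rough sketch of what reacting to container-level events could look like, assuming the fabric8 Kubernetes client; the label selector, container name, and cleanup action are hypothetical placeholders, not the operator's actual code:

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.client.Watcher;
import io.fabric8.kubernetes.client.WatcherException;

public class MasterPodReconcilerSketch {
    public static void main(String[] args) {
        KubernetesClient client = new KubernetesClientBuilder().build();
        // Watch the master pods the operator created (hypothetical label)
        client.pods()
              .inNamespace("default")
              .withLabel("locust.io/role", "master")
              .watch(new Watcher<Pod>() {
                  @Override
                  public void eventReceived(Action action, Pod pod) {
                      pod.getStatus().getContainerStatuses().stream()
                         // "locust" is the assumed name of the main container
                         .filter(cs -> "locust".equals(cs.getName()))
                         .filter(cs -> cs.getState() != null
                                 && cs.getState().getTerminated() != null)
                         .findFirst()
                         .ifPresent(cs -> {
                             // The locust container is done but the sidecar is
                             // not; here the operator would shut the sidecar
                             // down (or delete the pod) so the Job can finish.
                             System.out.printf("locust exited in %s, cleaning up%n",
                                     pod.getMetadata().getName());
                         });
                  }

                  @Override
                  public void onClose(WatcherException cause) {
                      // handle watch closure / re-establish the watch
                  }
              });
    }
}
```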

From a security perspective, I also need to investigate any additional cluster privileges (if any) that may be required by such a solution.
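
For example, if the fix watches pods as secondary resources, the operator's Role would presumably need at least something along these lines (a sketch of the kind of rules involved, not a verified manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: locust-operator-pod-watcher # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"] # observe container states
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["delete"]               # only if cleanup deletes the pod
```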

eduardchernomaz commented 1 year ago

I wonder if we can also just add a PreStop container hook to the master deployment which would call the /quitquitquit endpoint.
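
For illustration, that proposal would look something like this on the master's locust container; the exporter's quit endpoint and port are assumptions here, not a documented API:

```yaml
containers:
  - name: locust # master container
    image: locustio/locust
    lifecycle:
      preStop:
        exec:
          # Tell the exporter sidecar (same pod network) to shut down.
          # The port and the /quitquitquit path are assumptions about
          # the exporter, not a verified interface.
          command:
            - sh
            - -c
            - curl -fsS -XPOST http://localhost:9646/quitquitquit || true
```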

AbdelrhmanHamouda commented 1 year ago

It is an interesting idea, and one that makes a lot of sense. I will dig into it and see what is needed to put this in place.

AbdelrhmanHamouda commented 1 year ago

After some investigation, a preStop hook won't fit this use case. According to the documentation, it only gets invoked if the container termination is triggered from outside, not in cases where a container exits gracefully because of its internal process.

I am moving the investigation on to assessing whether the liveness probe can be used to achieve the desired effect without marking the job as "error".

AbdelrhmanHamouda commented 1 year ago

Possible implementation approach: change the metrics_exporter liveness probe to ping the locust container every 10 seconds and, on failure, send a curl to /quitquitquit.
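
A rough sketch of that approach; the Locust web port, the exporter's quit endpoint, and its port are all assumptions at this stage:

```yaml
containers:
  - name: locust-metrics-exporter
    image: containersol/locust_exporter # illustrative
    livenessProbe:
      periodSeconds: 10
      exec:
        command:
          - sh
          - -c
          # While the locust master answers on the shared pod network,
          # the probe passes; once it is gone, ask the exporter to quit
          # so the container exits and the Job can complete.
          - >-
            curl -fsS http://localhost:8089/ > /dev/null ||
            curl -fsS -XPOST http://localhost:9646/quitquitquit
```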

AbdelrhmanHamouda commented 1 year ago

This will be solved with the fix for #50