eduardchernomaz opened 1 year ago
Hello @eduardchernomaz, your assessment of the root cause is on point. Indeed, this behavior is caused by the metrics exporter. This is a known issue that I am aware of and intend to solve.
Problem explanation: the exporter is a sidecar container whose existence the native Locust image is not aware of. Kubernetes also doesn't natively support defining sidecar container behavior, e.g. "shut down after container X exits". This is important to understand because even when I solve this for the metrics exporter container, the same issue will recur if your organisation's cluster configuration injects other sidecars, e.g. the Istio sidecar.
In any case, until I push the fix, here are two simple workarounds to employ in the meantime.
Workarounds:
1. Once the test is done, call the `/quitquitquit` endpoint on the exporter to make it exit. It would be something like this: `curl -fsI -XPOST http://localhost:9646/quitquitquit`
2. Have the test itself call the `/quitquitquit` endpoint. In Locust terms, this translates to making the call in the function/method annotated with `@events.quitting.add_listener`.
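For the second workaround, a minimal sketch of such a listener (the function name is illustrative; the exporter address comes from the curl example above):

```python
import urllib.request


def shutdown_metrics_exporter(environment=None, **kwargs):
    """Ask the metrics-exporter sidecar to exit.

    Safe to call even if the exporter is already gone.
    """
    try:
        req = urllib.request.Request(
            "http://localhost:9646/quitquitquit", method="POST"
        )
        with urllib.request.urlopen(req, timeout=5):
            pass
    except OSError:
        # Exporter not reachable (possibly already stopped); nothing to do.
        pass


# In a locustfile, register it on the quitting event:
#
#     from locust import events
#     events.quitting.add_listener(shutdown_metrics_exporter)
```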
More info on the proposed fix (still under investigation): the idea is to extend the Operator to include container management as a secondary resource of the custom resource. That is, after the Operator creates the main cluster resources, it needs to register a reconciler to manage the secondary resources created by Kubernetes itself. Doing so, the Operator can start reacting to events coming from specific containers within specific pods.
From a security perspective, I also need to investigate any additional cluster privileges (if any) that such a solution may require.
I wonder if we can also just add a `PreStop` container hook to the master deployment which would call the `/quitquitquit` endpoint.
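If applied, such a hook might look roughly like the following sketch (the container name is an assumption; the port comes from the curl example above):

```yaml
# Hypothetical fragment of the master pod spec:
spec:
  containers:
    - name: locust  # the main Locust master container (name assumed)
      lifecycle:
        preStop:
          exec:
            command:
              ["curl", "-fsI", "-XPOST", "http://localhost:9646/quitquitquit"]
```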
It is an interesting idea, and one that makes a lot of sense. I will dig into it and see what is needed to put it in place.
After some investigation, the `PreStop` hook won't fit this use case. According to the documentation, it only gets invoked if the container termination is triggered from outside, not in cases where a container exits gracefully because of its own internal process.
I am moving the investigation on to assessing whether the liveness probe can be used to achieve the desired effect without marking the job as "error".
Possible implementation approach: change the metrics_exporter liveness probe to ping the Locust container every 10 seconds and, on failure, send a curl to `/quitquitquit`.
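A rough sketch of that idea (8089 is Locust's default web port; the exporter port comes from the curl example above; whether a probe command should carry a side effect like this is part of what needs validating):

```yaml
# Hypothetical liveness probe on the metrics_exporter container:
livenessProbe:
  exec:
    command:
      - sh
      - -c
      - |
        # If the Locust master's web port no longer answers,
        # ask the exporter to shut itself down.
        if ! curl -fs http://localhost:8089/ >/dev/null; then
          curl -fs -XPOST http://localhost:9646/quitquitquit
        fi
  periodSeconds: 10
```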
This will be solved with the fix for #50
Describe the bug
After the CRD has been applied and the test has run for the specified duration, the process fails to finish: the worker job completes, while the master job continues to run.

To Reproduce
Steps to reproduce the behavior:
1. Apply a `LocustTest` manifest to start the test.

Expected behavior
Once the test has completed, both the worker and the master pods should be in a `Completed` state and eventually removed.

Screenshots
Pods status after the test has completed.
Jobs status after the test has completed.

Additional context
I suspect that the problem is that, on the master node, the `locust-metrics-exporter` container never stops and continues to run, failing to signal job completion.