BartNetJS opened this issue 1 year ago
Hey @BartNetJS ! Thanks for opening this issue. It's a great idea to enable scheduling tests. We have this particular enhancement in our backlog. We'll get back to you on this once we discuss it in our next team meeting. 😄
Great @adnanrahic !
Do you have a workaround or an idea for how I can achieve the following today?
- CI/CD into an environment (in my case Azure AKS: dev, tst, acc, and prd)
- Run Tracetest tests as part of the CD
As noted above, the GitHub agent doesn't have direct access to the Tracetest server. So one option is to deploy the tests via Helm charts, have them executed there, and report the result back.
One idea: start a Job as part of the helm install/upgrade:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-job
spec:
  template:
    spec:
      containers:
        - name: job
          image: your-image
          command: ["command-to-launch-tests"]
      restartPolicy: Never
  backoffLimit: 4
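For illustration, the command-to-launch-tests inside that image could be a small script that calls the Tracetest CLI against the in-cluster service; the service address and test file path below are assumptions, not something the chart provides:
#!/bin/bash
set -e
# inside the cluster the server is reachable through its Service, so no tunnel is needed
# (service name, namespace, and port are assumptions; adjust to your install)
tracetest configure --endpoint http://tracetest.tracetest.svc.cluster.local:11633
# depending on the CLI version you may need -a false here to skip the analytics prompt (see the note further down)
# run the test definition bundled into the image (the path is a placeholder)
tracetest test run --definition /tests/my-test.yaml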
Wait for the job to complete:
kubectl wait --for=condition=complete job/my-job
Get the state of the job:
kubectl get job my-job -o json | jq -r '.status.conditions[] | select(.type=="Complete") | .status'
And read the logs:
kubectl logs job/my-job
And validate the result
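Put together, those steps could become a small wrapper script along these lines; the job name, namespace, and timeout are placeholders:
#!/bin/bash
set -euo pipefail
JOB="job/my-job"   # matches the Job created by the chart (placeholder name)
NS="default"       # namespace the release is installed into
# block until the Job reports Complete, but don't wait forever
kubectl wait --for=condition=complete "$JOB" -n "$NS" --timeout=600s
# read the Complete condition explicitly
STATUS=$(kubectl get "$JOB" -n "$NS" -o json | jq -r '.status.conditions[] | select(.type=="Complete") | .status')
# surface the test output in the pipeline logs
kubectl logs "$JOB" -n "$NS"
# fail the pipeline run if the Job did not complete successfully
[ "$STATUS" = "True" ] || exit 1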
Or do you have another idea?
That's a great option!
We have one complete example of running Tracetest tests in K8s with Testkube. It's a cloud-native test runner.
Here are all resources on that:
I'm currently working on an example using Tekton. I'm hoping to get that done next week!
Let me know if you need more guidance here. I'm happy to help.
Hi @BartNetJS. We have a similar issue in our own pipeline. We run the tests from the GH Actions runners, which don't have direct access to the development/integration cluster we use.
We work around the issue by leveraging kubectl port-forward. In essence, we open a tunnel to the cluster and configure tracetest to talk to localhost:11633.
The pipeline is this: https://github.com/kubeshop/tracetest/blob/main/.github/workflows/pull-request.yaml#L370.
You need to make sure you have a correctly configured kubectl for this to work. We use gcloud, so we use the google-github-actions/get-gke-credentials action to configure that. You'd need to tailor that to your use case.
We also have this script that helps us make sure the port is ready before trying to connect. A simpler version can be found here.
Once that is ready, you can use tracetest directly. Here's an example script that ties this together:
#!/bin/bash
# this is a variable so you can pass global params, like namespace, context, etc, if needed
KUBECTL="kubectl"
# make sure kubectl can talk to the cluster.
$KUBECTL get ns > /dev/null || exit 1
# configure the namespace and service name accordingly
$KUBECTL port-forward -n tracetest svc/tracetest 11633 & # this command blocks, so we send it to background
# since port-forward is in bg, we don't know if it's working.
# this loops forever, so adding a timeout is a good idea.
# see https://github.com/kubeshop/tracetest/blob/main/scripts/wait.sh
echo "Waiting for tunnel to be ready"
while ! nc -z localhost 11633; do
  sleep 1
done
echo "Tunnel ready"
tracetest configure --endpoint http://localhost:11633
tracetest test run --definition somefile.yaml
Does this make sense?
That is a great solution @schoren! I had to add -a false to tracetest configure --endpoint http://localhost:11633, otherwise it hangs endlessly on the prompt to enable analytics.
@BartNetJS @schoren Was wondering if this issue is complete or if we have more to do on it?
I'd like to deploy Tracetest's YAML test files as part of a CD pipeline. Currently the Tracetest server is already deployed by a Helm chart via a GitHub Action. The dedicated GitHub Actions agent doesn't have direct access to the Tracetest server running in AKS, so tracetest configure -g --endpoint http://... is not an option for getting the test files deployed to the Tracetest server.