Our staging Jenkins jobs (e.g. anbox) do the following (roughly as in the sketch after this list):
Build the image from the master branch of the project repo
Generate a tag for the image
Push that image to the `prod-comms.docker-registry.canonical.com` registry
Update the image tag on the Kubernetes deployment, so k8s will then pull down the image we just built and run it for the staging site.
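A minimal sketch of those steps in shell, to make the flow concrete. Only the registry hostname and the deployment name ("anbox-cloud-io") come from this description; the container name, image path and tagging scheme are assumptions.

```sh
set -euo pipefail

# Generate a tag for the image (assumed scheme: short git SHA of master)
TAG="$(git rev-parse --short HEAD)"
IMAGE="prod-comms.docker-registry.canonical.com/anbox-cloud-io:${TAG}"   # image path is an assumption

# Build the image from the master branch checkout and push it to the registry
docker build -t "${IMAGE}" .
docker push "${IMAGE}"

# Update the image tag on the Kubernetes deployment so k8s pulls the new image;
# the container name "anbox-cloud-io" is a hypothetical placeholder
kubectl set image "deployment/anbox-cloud-io" "anbox-cloud-io=${IMAGE}"

# Wait for the rollout to finish (this is the step that currently hangs on errors)
kubectl rollout status "deployment/anbox-cloud-io"
```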
When there's an error in the app (e.g. because the `SECRET_KEY` env var isn't defined), it manifests in the Jenkins output as the rollout check repeating e.g. "Waiting for deployment "anbox-cloud-io" rollout to finish: 2 out of 3 new replicas have been updated..." indefinitely. This is not ideal.
What we want to see instead is an error message from the pods showing that the build has failed. This could probably be achieved through a combination of (sketched after this list):
Ensuring the rollout times out after a reasonably short period, or fails immediately when pods fail in a specific way
Inspecting `kubectl rollout status` to see which pods have errored
Displaying the output of `kubectl logs pod/{failing-pod} | tail` to expose the error message
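A minimal sketch of how those pieces could fit together in the Jenkins job. The deployment name comes from the log message above; the label selector (`app=...`), the timeout value and the number of log lines tailed are assumptions.

```sh
set -euo pipefail

DEPLOYMENT="anbox-cloud-io"

# Bound the wait instead of repeating "2 out of 3 new replicas have been
# updated..." forever; --timeout makes rollout status exit non-zero on expiry
if ! kubectl rollout status "deployment/${DEPLOYMENT}" --timeout=180s; then
  # Show which pods are unhealthy (CrashLoopBackOff, ImagePullBackOff, ...)
  kubectl get pods -l "app=${DEPLOYMENT}"

  # Tail each pod's logs so the underlying app error (e.g. a missing
  # SECRET_KEY env var) is visible directly in the Jenkins console
  for pod in $(kubectl get pods -l "app=${DEPLOYMENT}" -o name); do
    echo "--- last log lines from ${pod} ---"
    kubectl logs "${pod}" --tail=20 || true
  done

  exit 1
fi
```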