Description
We've noticed behavior on the nightly CI pipelines where one failed parallel job caused other in-progress jobs in the same workflow run to be cancelled and then terminated non-gracefully once the cancellation timeout was reached. This led to inconsistent behavior in subsequent workflow runs involving Terraform, since pre-existing state files were left locked. Workflows would then fail continually until a new commit was pushed, which forced the workflow to generate a new state key. That works around the lock, but it leaves orphaned resources behind.
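For context, a minimal sketch of the failure mode (the job layout, matrix flavors, and Terraform steps below are illustrative assumptions, not the actual workflow):

```yaml
# Illustrative only -- not the real uds-core workflow.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      # fail-fast defaults to true: the first matrix job to fail
      # cancels every other in-progress matrix job.
      matrix:
        flavor: [upstream, registry1, unicorn]
    steps:
      - uses: actions/checkout@v4
      - name: Apply
        run: terraform apply -auto-approve  # holds the state lock while running
      # When a sibling job fails, this job is cancelled; if terraform is still
      # running when the cancellation timeout expires, the runner kills it
      # outright and the state lock is never released.
```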
The intent of this PR is to ensure that all failed jobs exit gracefully and do not impact the status of other jobs running in the same workflow. This will add some time to workflow runs, but it ensures that resources are properly cleaned up and that processes terminate gracefully before the pipeline completes.
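A minimal sketch of the approach, assuming GitHub Actions matrix syntax (the job and step names here are illustrative, not the exact diff): disable fail-fast so one failure no longer cancels sibling jobs, and guard teardown with `if: always()` so Terraform releases its state lock and destroys resources even when an earlier step fails.

```yaml
# Sketch of the approach -- names and commands are assumptions, not the exact changes.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false          # a failing matrix job no longer cancels its siblings
      matrix:
        flavor: [upstream, registry1, unicorn]
    steps:
      - uses: actions/checkout@v4
      - name: Apply
        run: terraform apply -auto-approve
      - name: Teardown
        if: always()            # runs on success, failure, and cancellation alike
        run: terraform destroy -auto-approve
```

The trade-off is the runtime cost noted above: every matrix job now runs its teardown to completion, but a run can no longer leave behind a locked state file or orphaned resources.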
Related Issue
Example workflow run: https://github.com/defenseunicorns/uds-core/actions/runs/11747339383/job/32731009925?pr=989#step:9:63
Type of change
Checklist before merging