vkamra opened this issue 6 years ago
If the jobs triggered in parallel all feed directly into the job that should wait for them (i.e. they aren't followed by further parallel jobs before the fan-in point), you can just add dependencyMode: strict (documented here) to the job that should wait; a sketch follows below. It's more complicated if you need to run a series of jobs in parallel, so that not all of the parallel jobs triggered are direct inputs to the last job.
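For reference, a minimal sketch of that fan-in shape in shippable.jobs.yml. The job and resource names (test_A, test_B, deploy, app_repo) are made up for illustration and the TASK scripts are placeholders; the strict docs linked above cover the exact semantics:

```yaml
jobs:
  # Two jobs that run in parallel off the same trigger.
  - name: test_A
    type: runSh
    steps:
      - IN: app_repo              # assumed gitRepo resource
      - TASK:
        - script: ./run-suite-a.sh

  - name: test_B
    type: runSh
    steps:
      - IN: app_repo
      - TASK:
        - script: ./run-suite-b.sh

  # Fan-in job: with dependencyMode strict it waits until nothing upstream
  # is running or queued, and only fires if all inputs are successful.
  - name: deploy
    type: runSh
    dependencyMode: strict
    steps:
      - IN: test_A
      - IN: test_B
      - TASK:
        - script: ./deploy.sh
```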
@a-murphy we are using dependencyMode: strict, but it has some flaws.
For example, we have a chain: Job A, with B and C as strict dependents. Right now B and C are triggered only when the queue for A is done and the latest A run is successful; if the last A run failed, B and C are not triggered.
We are trying to find out how to configure the dependencies so that B and C are still triggered when the queue for A is done, but against the last successful A run, even if that isn't the latest run by order.
For example: Job A's last run failed, the run before it failed, and the run before that succeeded; B and C should then be triggered against that last successful run.
Only dependencyMode: immediate will trigger the job when one of the inputs has failed. However, the only way to specify a version of an input that is not the latest is with pinned versions. Could you explain why you need to run with the last successful version of the input instead of the latest? Failed versions will already have the files copied from the successful version preceding them.
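For reference, pinning is done on the IN statement. A minimal sketch, assuming an image resource named app_image and that versionName is the pinning key for that resource type (the pinned-versions docs have the exact fields):

```yaml
jobs:
  - name: long_test
    type: runSh
    dependencyMode: strict
    steps:
      - IN: app_image
        versionName: "1.2.3"      # pin to a specific, known-good image tag
      - TASK:
        - script: ./long-test.sh
```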
We have Job A, which is executed against every commit and produces an image as its result; it's OK for that job to fail sometimes. After that job we have 2 long-running jobs, and to reduce the number of runs they are dependencyMode: strict. The idea is that once engineers stop committing/merging, we want the long-running jobs to test the successful image and promote it further.
Right now, if the last run of Job A has failed, the 2 long jobs won't be executed at all.
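Roughly, the wiring is something like this (names are illustrative; app_image is the image resource Job A pushes on success):

```yaml
jobs:
  # Job A: runs on every commit; pushes an image on success, failures are OK.
  - name: job_A
    type: runSh
    steps:
      - IN: app_repo                 # assumed gitRepo resource, one run per commit
      - TASK:
        - script: ./build-and-push.sh
      - OUT: app_image               # only updated when job_A succeeds

  # Long-running job; strict mode keeps it from running per commit.
  - name: long_test_1
    type: runSh
    dependencyMode: strict
    steps:
      - IN: job_A
      - IN: app_image
      - TASK:
        - script: ./long-test-1.sh

  # long_test_2 is wired the same way.
```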
Since there's an image resource between the jobs, the subsequent job will typically use the image from the last successful run of the first job because the image resource will only be updated when the first job succeeds. The last version of the first job won't be used directly in the second.
However, the only way to trigger on failure right now is with dependencyMode: immediate. Unfortunately, any attempt to trigger on failures currently carries a risk of triggering the subsequent jobs more than once. If you are more concerned with triggering the later jobs when the last job failed, and don't mind potential extra runs of those later jobs, you could put another runSh job after the image resource. That job would have the image resource as its only input, run a sleep command in its TASK section to give any later runs a chance to finish, and then copy the image to another image resource that is used as the input to the jobs with dependencyMode: strict. (You may also need to add something like the BUILD_NUMBER to the image resource so that the resource is identified as updated.) This gives dependencyMode: strict something it can "see" and that won't fail. The new middle job will be triggered every time the first job runs, and the later jobs will be triggered once the middle job has stopped running. It would be slower overall, but still able to trigger the next jobs on failures without triggering them on every run.
Hi, we have multiple stages in our pipeline, and in certain stages we'd like to run jobs in parallel, triggering the next stage only when all those jobs are successful.
It wasn't clear to me how we would represent the fan-out/fan-in of multiple jobs in Shippable.
Any pointers to samples or documentation here would be great.
-Vaibhav