jawher opened 9 years ago
Note that this proposed solution is a poor man's pipeline, as it doesn't support some advanced features of a real pipeline with stages like fanning-in (wait for multiple jobs before continuing).
What about removing pipelines from the yaml file?
Each bazooka project would keep its build configuration in the yaml file, but not its downstream/upstream dependencies; we would implement the pipelines in the Bazooka API instead.
With this solution, I think it would remove a class of possible configuration errors.
It would also allow us to implement more complex pipeline features: fanning-in, human validation...
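As a purely hypothetical sketch of that direction (nothing below is existing Bazooka API syntax; every key is invented), a pipeline definition submitted to the API, instead of living in each project's yaml, might look like:

```yaml
# Hypothetical pipeline document registered through the Bazooka API;
# every key here is invented for illustration. Project names match
# the example discussed in this issue.
pipeline: api-delivery
stages:
  - project: api                    # entry point, triggered by scm
  - parallel:                       # fan-out once api succeeds
      - project: api-perf
      - project: api-it
  - wait_for: [api-perf, api-it]    # fan-in: continue when both pass
  - approval: manual                # human validation gate
  - project: deploy-api
```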
For simplification, we could consider that each downstream job gets the artifacts generated by its upstream job in /incoming.
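As an illustration of that convention, a downstream job's build script could simply read from /incoming; a minimal sketch, where the config keys and the api.war artifact name are assumptions for the example, not actual Bazooka syntax:

```yaml
# Hypothetical deploy-api build config. Assumes the upstream api job's
# artifacts were mounted into this job's container under /incoming;
# the keys and the api.war artifact name are invented for the example.
image: tomcat:8
script:
  - ls /incoming                    # artifacts produced by the upstream api job
  - cp /incoming/api.war /usr/local/tomcat/webapps/api.war
```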
As for the list of useful features for pipelining: we only need to start with the sequential workflow (downstream/upstream) and create issues for the other features.
A pipeline could be described in a yaml DSL. I will try to add a simple example soon.

Just an idea:
```yaml
entry_point:
  - name: api
    bzk_project: api-unit   # (optional) bazooka project name (if different from name)
    triggered_by: scm       # (optional) scm, manual... defaults to scm
    triggers:
      - api-perf
      - api-it
wait_for:
  - wait_jobs:
      - api-perf
      - api-it
    triggers:
      - deploy-api
```
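Read top to bottom: the `api` entry point is triggered by scm and fans out to `api-perf` and `api-it`; the `wait_for` block then fans in, waiting for both jobs to finish before triggering `deploy-api`.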
Make it possible for a job to trigger other jobs after it's finished.
An example:
- `api` is a bazooka project for a REST server written in Java
- `deploy-api` deploys `api` to a tomcat server
- `api-perf` is another project which benchmarks the performance of `api`
- `api-it` runs integration tests on `api`
- `ui` is an angularjs frontend
- `ui-selenium` is the last project which tests the `ui` screens

In the `api` config: on success, the `api` job triggers `deploy-api` and `ui`.

In the `deploy-api` config (which supposes the previous job's artifacts are mounted on the `/incoming` directory): etc.
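A minimal sketch of what such a trigger declaration could look like in the `api` project's yaml, assuming an `on_success`/`trigger` key that does not exist in Bazooka today (all keys below are invented for illustration):

```yaml
# Hypothetical .bazooka.yml for the api project. The on_success/trigger
# keys are invented to illustrate the proposal; they are not existing
# Bazooka configuration.
language: java
script:
  - mvn package
on_success:
  trigger:
    - deploy-api
    - ui
```

Under this scheme `deploy-api` would declare no upstream knowledge at all; it would simply consume whatever lands in `/incoming`, as sketched earlier in this thread.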