jimmykarily opened this issue 4 years ago
I would love to have all the bash scripts that call catapult inside the pipeline live in a folder, e.g.:
scripts
├── 00-recover-ekcp.sh
├── 10-deploy-kubecf.sh
├── 20-test-smokes.sh
├── 21-test-cats.sh
├── 22-test-brains.sh
├── 23-test-sits.sh
└── 30-upgrade-kubecf.sh
and then provide one script that loops over those and calls them in order. Basically a simple implementation of the Concourse pipeline, but locally as a shell script.
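A minimal sketch of such a runner, assuming the `scripts/` layout above (the function and directory names are illustrative, not part of the proposal):

```shell
#!/usr/bin/env bash
# Run every *.sh under a directory in lexical order -- the 00-/10-/20-
# numeric prefixes define the order -- and abort on the first failing step.
set -euo pipefail
shopt -s nullglob   # a missing/empty directory simply means zero steps

run_steps() {
  local dir="${1:-scripts}"
  local step
  for step in "$dir"/*.sh; do
    echo ">>> running $step"
    bash "$step"    # with set -e, a failing step aborts the whole run
  done
}

run_steps "$@"
```

Because the steps are ordinary standalone scripts, each one can also be invoked individually, and linted with shellcheck.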
Those scripts/* can then also be called from the Concourse yaml.
Fair to me, but I want to underline that this approach has one disadvantage: it ties the pipeline to a code checkout. If you look closely, the steps of the kubecf pipelines are basically one-liners (deploy/test). I would second that approach if the scripts become "big chunks".
> but I want to underline that this approach has one disadvantage: it ties the pipeline to a code checkout
@mudler the pipeline as it exists on concourse will reflect the state of the template from the checkout from which it was deployed, so isn't this true either way?
I also like little shell scripts as they can be linted easily.
> but I want to underline that this approach has one disadvantage: it ties the pipeline to a code checkout

> @mudler the pipeline as it exists on concourse will reflect the state of the template from the checkout from which it was deployed, so isn't this true either way?
Well, not necessarily: `set-pipeline` is a `fly` command, so you can always deploy pipelines that are not tied to the code (you can then set up whatever auto-deploy mechanism you like, but Concourse doesn't guarantee that by default).
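For reference, the decoupling described here looks roughly like this: `set-pipeline` uploads whatever yaml file you point it at, regardless of which checkout (if any) it came from. The target name `ci` and pipeline name `kubecf` below are placeholders, not values from this repo:

```shell
# Deploy a pipeline definition from an arbitrary local file;
# nothing forces this file to match the currently checked-out code.
fly -t ci set-pipeline -p kubecf -c pipeline.yml

# Newly set pipelines start paused; unpause to let jobs run.
fly -t ci unpause-pipeline -p kubecf
```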
As we add steps to the Concourse pipeline, it slowly becomes a very big and unmanageable yaml file. We even have bash scripts defined inline in the yaml.
We should extract anything that can be extracted into smaller, manageable files, whether it is a script or a task definition (yaml).