This repo provides a collaboration framework for operating Cloud Foundry and services at scale.
The goal is to automate most (if not all) interactive operations against the Bosh, CF and IaaS APIs through concourse pipelines, while keeping the volume of boilerplate code low and limiting the skills prerequisites required to contribute services automation.
The COA framework strives to limit the skills prerequisites for each persona, as the following table summarizes:
Persona | Skills
---|---
service operator | git, concourse concepts (UI usage, no authoring as pipelines are generated)
template author | git, bosh, terraform (*), cf app manifest (*), concourse-pipelines (*), k8s (*)
COA framework developer | ruby, concourse, shell, bosh, terraform, git
(*) optional depending on contributed templates
COA takes templates and configurations as inputs, and generates concourse pipelines as outputs. Generated pipelines automatically reload and execute. As a result, resources (IaaS, Bosh, CF) get provisioned and operated through pipelines.
Diagram inspired by Dreftymac
A root deployment contains the infrastructure to operate nested deployments.
This section provides an overview of the deployment topology and bootstrapping process used by the Orange CloudFoundry skill center team.
NB: the source is in the plantuml file (bosh overview); see caching tips.
The inception, micro-depls, master-depls, ops-depls and expe-depls are root deployments in the Orange CF skill center infrastructure (each associated with a dedicated bosh director).
The nested deployment model enables a split of responsibility as the operations team scales.
The plan is to open source Orange's CF skill center team template git repo in the near future (once the remaining secrets get cleaned up); watch the paas-templates repo for incoming commits.
This section describes the pipelines that COA generates and loads into concourse. Some are singletons while others are templated and instantiated for each root deployment.
The following diagram illustrates the sequence of pipeline generation and loading.
This section details the format supported by the templating engine in both the template repo and the secrets repo.
This feature lets a deployment aggregate several operations across multiple deployers. Currently we support:
There are no dependencies between deployers; each is processed independently.
Files included in the `template` dir, in a deployment, are used by `bosh-deployer`.
Files included in the `concourse-pipeline-config` dir, in a deployment, are used by `concourse-deployer`.
The base concourse file should be named like the directory and end with `.yml`.
By default, this pipeline is not enabled; you need to activate it.
The diagram below illustrates the concourse pipeline generation for the two types of supported resources (Bosh deployments and CF apps). The diagram includes the main hooks the templating engine supports during the resource life cycle.
For each boshrelease, when an `enable-deployment.yml` file is found in the secrets repo, all template files in the corresponding template repo dir are processed with spruce (template files need to end with the `-tpl.yml` extension).
If a template directory contains hook scripts with specific names, then these scripts get executed in the following order:
1. `post-generate.sh`: can execute shell operations or spruce tasks.
Restrictions: as the post-generation script is executed in the same docker image running spruce, no spiff is available.
Environment variables available:
* GENERATE_DIR: directory holding generated files. It's an absolute path.
* BASE_TEMPLATE_DIR: directory where `post-generate.sh` is located. It's a relative path.
2. `pre-deploy.sh`: can execute shell operations (bosh, credhub, cf and spruce).
Legacy support: scripts named `pre-bosh-deploy.sh` are still supported.
Environment variables available:
* GENERATE_DIR: directory holding generated files. It's an absolute path.
* BASE_TEMPLATE_DIR: directory where `pre-deploy.sh` is located. It's a relative path.
* SECRETS_DIR: directory holding secrets related to the current deployment. It's a relative path.
3. `post-deploy.sh`: can execute shell operations (including curl).
Legacy support: scripts named `post-bosh-deploy.sh` are still supported.
Environment variables available:
* GENERATE_DIR: directory holding generated files. It's an absolute path.
* BASE_TEMPLATE_DIR: directory where `post-deploy.sh` is located. It's a relative path.
* SECRETS_DIR: directory holding secrets related to the current deployment. It's a relative path.
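As a sketch of the hook contract above, a minimal `pre-deploy.sh` body might look like the following. The secret file name and the simulated environment values are invented for the example; in a real run COA exports GENERATE_DIR and SECRETS_DIR itself before invoking the hook:

```shell
# Simulated COA environment (illustrative values; COA exports these itself
# before invoking the hook in a real pipeline run).
export GENERATE_DIR="$(mktemp -d)"   # absolute path to generated files
export SECRETS_DIR="$(mktemp -d)"    # relative path in a real run

# Hypothetical secret file consumed by the hook.
printf 's3cr3t' > "$SECRETS_DIR/admin-password"

# Minimal pre-deploy.sh body: read a secret before handing over to bosh.
ADMIN_PASSWORD="$(cat "$SECRETS_DIR/admin-password")"
# A real hook would call bosh/credhub/cf here; we only echo a stand-in.
echo "password length: ${#ADMIN_PASSWORD}"   # prints "password length: 6"
```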
In the `deployment-dependencies.yml` file, insert an `errands` key with a subkey named like the errand job to execute.
Manifest generation supports dedicated parts for each specific IaaS:
* add directories (like openstack, cloudstack, etc.) for each specific IaaS in the template directory.
* set an `iaas-type` credential in the secrets repo to match the directory name.
As all spec subdirectories are processed by terraform, it is not possible to use the same convention. So, to support `iaas-type` with terraform, a directory called `spec-<iaas-type>` is required.
Other terraform mechanisms apply.
The newest bosh features are not implemented in bosh cli v1, so some features are only available to deployments using bosh cli v2. This can be combined with IaaS-specific support.
By convention, all files in the template dir matching `*-operators.yml` are used by `bosh-deployment` as ops-files inputs. These files are not processed by spruce.
By convention, all files in the template dir matching `*-vars-tpl.yml` are processed by spruce and generate `*-vars.yml` files.
As spruce is no longer required, it is also possible to include vars files directly: files matching `*-vars.yml` are used by `bosh-deployment` but ignored by spruce.
These files are used by `bosh-deployment` as vars-files inputs.
Warning: if there is a naming conflict between `*-vars-tpl.yml` and `*-vars.yml`, the `-tpl.yml` file wins!
The rules for ops-files and vars-files above also apply here. To support operators and vars files for cloud and runtime config, we have to define additional conventions, as they live in the same directory:
* `*cloud-operators.yml`: operators for cloud-config
* `*cloud-vars.yml`: vars for cloud-config
* `*runtime-operators.yml`: operators for runtime-config
* `*runtime-vars.yml`: vars for runtime-config

Migration v1 to v2 tip: empty vars-files and ops-files are generated to avoid an error message.
By default, git submodules are not checked out (this can be very time consuming), but some bosh releases require them. There is a mechanism to detect the submodules required by a release and include them only for that bosh release (this is expected to be an empty yaml file!).
In `deployment-dependencies.yml`, it is possible to extend the scanned secrets paths:

```yaml
resources:
  secrets:
    extented_scan_path: ["ops-depls/cloudfoundry", "...."]
```
Following is a `deployment-dependencies.yml` sample (should be placed in the boshrelease deployment dir):

```yaml
---
deployment:
  bosh-deployment: # or micro-bosh:
    releases:
      route-registrar-boshrelease:
        base_location: https://bosh.io/d/github.com/
        repository: cloudfoundry-community/route-registrar-boshrelease
      shield:
        base_location: https://bosh.io/d/github.com/
        repository: starkandwayne/shield-boshrelease
      xxx_boshrelease:
        base_location: https://bosh.io/d/github.com/
        repository: xxx/yyyy
    errands:
      smoke_tests:
```
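Presumably, `base_location` and `repository` simply concatenate into the release download URL; this is an assumption about the mechanics, shown here with the values from the sample above:

```shell
# Values taken from the sample above; the concatenation is an assumption
# about how COA builds the download URL, not documented behavior.
base_location="https://bosh.io/d/github.com/"
repository="cloudfoundry-community/route-registrar-boshrelease"
release_url="${base_location}${repository}"
echo "$release_url"
# prints https://bosh.io/d/github.com/cloudfoundry-community/route-registrar-boshrelease
```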
Starting with COA 2.2, it is possible to use a generic key (`bosh-deployment`) or `<deployment-name>` (i.e. the directory name) under `deployment`.
For each cf-application, when an `enable-cf-app.yml` file is found, all files in the template dir ending with `-tpl.yml` are processed with spruce.
If a template directory contains a `pre-cf-push.sh` file, then this script is run.
Environment variables available:
* BASE_TEMPLATE_DIR: directory where `pre-cf-push.sh` is located. It's a relative path.
It is also possible to use a `post-deploy.sh`; it is like a 'post-cf-push' script with an inconsistent name (we reuse the same concourse task)...
To interact easily with cloudfoundry, the previously listed environment variables are available.
`enable-cf-app.yml` file format:

```yaml
---
cf-app:
  probe-apps-domains:
    cf_api_url:
    cf_username:
    cf_password:
    cf_organization:
    cf_space:
```
If a ci_deployments descriptor (i.e. a file called `ci-deployment-overview.yml`) is detected in `secrets_dir/<depls>`, then an auto-update job is generated.
By default all pipelines deploy into the `main` team, but it is possible to add a `team` key to specify another team. See the file format below.
WARNING: `bootstrap` or `*-init` pipelines must belong to the `main` team.
Pre-requisite: the team to deploy to must exist.
`ci-deployment-overview.yml` may include a `terraform_config` key to generate a terraform pipeline. The `terraform_config` key must include a `state_file_path` key indicating the tfstate file path. It assumes that a spec dir is also included alongside the tfstate file.
```yaml
---
ci-deployment:
  ops-depls:
    target_name: concourse-ops
    terraform_config:
      state_file_path: ops-depls/tf-config-dir
    pipelines:
      ops-depls-generated:
        team: bootstrap # optional - Default: main
        config_file: xxxx/pipelines/ops-depls-generated.yml
        vars_files:
          - xxx/pipelines/credentials-ops-depls-pipeline.yml
          - xxx/root-deployment.yml
      ops-depls-cf-apps-generated:
        config_file: xxx/pipelines/ops-depls-cf-apps-generated.yml
        vars_files:
          - xxx/pipelines/credentials-ops-depls-pipeline.yml
          - xxx/root-deployment.yml
```
It scans `secrets/<root_deployment>` for directories. If an `enable-deployment.yml` is found, the deployment status is set to `enabled`, otherwise to `disabled`.
`disabled` deployments are candidates for deletion.
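The scan rule can be sketched as follows (the directory layout is invented for the illustration):

```shell
# Layout invented for the illustration.
secrets="$(mktemp -d)"
mkdir -p "$secrets/ops-depls/cassandra" "$secrets/ops-depls/redis"
touch "$secrets/ops-depls/cassandra/enable-deployment.yml"

# A deployment dir is enabled iff it contains enable-deployment.yml.
for d in "$secrets/ops-depls"/*/; do
  name="$(basename "$d")"
  if [ -f "${d}enable-deployment.yml" ]; then
    echo "$name: enabled"
  else
    echo "$name: disabled (candidate for deletion)"
  fi
done > "$secrets/status.txt"
cat "$secrets/status.txt"
```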
NYI
We provide a new config mechanism shared across all root deployments. A default is provided by cf-ops-automation, but it is possible to override these values with a `shared-config.yml` file located in the paas-template root directory. It is also possible to override again with a `private-config.yml` file located in the secrets root directory.
In /docs/reference_dataset, you will find a set of Markdown files describing structure examples for the repos, links to example files, as well as the lists of credentials needed by the generated pipelines to be deployed.
Those files are generated automatically following the specs given in features/.
In order to be portable across multiple infrastructures, and to allow for centralized stemcell upgrades, deployment authors rely on COA to manage stemcells, including offline replication, upload to the bosh director and purge, and stemcell version selection at deployment time. Deployment authors therefore use the following syntax in their deployment manifests:
```yaml
stemcells:
- alias: trusty
  os: ubuntu-trusty
  version: latest
```
At deployment time, the deployment manifest is transformed by COA to load the expected stemcell version (say `3586.25`).
The same stemcell generation (currently `ubuntu-trusty`) is used for all deployments, as defined by authors in `shared-config.yml` or overloaded by operators in `private-config.yml`.
The exact stemcell version (say `3586.25`) is used within a root-deployment, as defined by authors in `root-deployment.yml`.
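A sketch of that substitution, assuming a simple in-place replacement of `latest` with the pinned version (the actual COA implementation may differ):

```shell
# Manifest fragment from the text; the sed substitution below is a sketch
# of the transformation, not COA's actual implementation.
manifest="$(mktemp)"
cat > "$manifest" <<'EOF'
stemcells:
- alias: trusty
  os: ubuntu-trusty
  version: latest
EOF

pinned_version="3586.25"   # example value, as pinned in root-deployment.yml
sed -i "s/version: latest/version: \"$pinned_version\"/" "$manifest"
grep 'version:' "$manifest"
```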
See github issues.
Prereqs:

```sh
gem install bundler
bundle install --path vendor/bundle
```
If you are running the full test suite, some of the integration tests are dependent on the fly CLI.
To log in with the fly CLI and target the cf-ops-automation CI:

```sh
fly -t cf-ops-automation login
```
You will be prompted to select either the Github, UAA or Basic Auth authentication methods.
After these are set up, you will be able to run the test suite via:

```sh
bundle exec rspec
```
While developing new pipelines, it might be easier to generate them locally and upload them manually to a concourse instance:

```sh
fly -t preprod login -u login -p password -c concourse-url
./scripts/generate-depls.rb --depls cloudflare-depls -t ../paas-template/ -p ../bosh-cloudwatt-preprod-secrets/ --no-dump -i ./concourse/pipelines/template/tf-pipeline.yml.erb --iaas openstack
SECRETS=../bosh-cloudwatt-preprod-secrets/ TARGET_NAME=preprod ./scripts/concourse-manual-pipelines-update.rb -dcloudflare-depls
```
Once the pipelines are correct, commit: the pipelines will then perform automated deployment. See scripts/concourse-generate-all-pipelines.sh.
In order to leverage IDE capabilities for terraform when editing TF config files (completion, syntax highlighting, etc.), a consistent local environment is required, with spruce templates interpolated and tf config files merged from the template and secrets repos.
This also enables local execution of `terraform plan` and `tf apply`, providing shorter feedback cycles.
The `scripts/setUpTfDevEnv.sh` script partially automates the setup of such a local environment:

```sh
source scripts/setUpTfDevEnv.sh
```
Work happens on the `develop` branch. Only Pull Requests based on this branch trigger automated builds.
Use the cf-ops-automation pipeline to perform a release. You may need to bump the version using one of the following jobs: `major`, `minor` or `patch`.
Once the version is ok, simply launch the `ship-it` job.
This type of release requires manual work:
* `hotfix.version`: add the expected release version. Format is `<major>.<minor>.<patch>`, e.g. 1.1.0 or 1.7.2
* `hotfix_release_notes.md`: the release notes to publish on github
* ensure `run-tests-for-hotfix-branch` is successful
* run `ship-hotfix` to publish the release on github

In order to quickly create an environment in which you can use the COA engine, you can use the `bootstrap_coa_env.rb` script:

```sh
ruby scripts/bootstrap_coa_env.rb /path/to/prereqs1.yml /path/to/prereqs2.yml ... /path/to/prereqsn.yml
```

where the prereqs YAML are files containing configuration information for the bootstrapping; pipelines will then be created from the reference dataset data.
The prerequisites YAML files are expected to contain information that will help the script build the environment. You can write it all in a single file or in multiple files. An example file can be found at /lib/coa_env_bootstrapper/prereqs.example.yml.
It can contain up to 8 main keys:
Once the script is done running, it displays information about how to connect to the Concourse it has installed. If you wish to display that information again, you can run `bucc info`.
If you're using VirtualBox as an IaaS on OS X, you may have trouble connecting to the VMs installed by BUCC's BOSH, for instance when the script is trying to push the config repository to the Git server it had installed. In this case, run the `bucc routes` command to create the proper routes and enable communication to the VMs.
Some stemcells are very large, and downloading them from the internet can take a lot of time, which can lead to timeouts. To prevent this, you can manually upload the stemcell to the BOSH Director and deactivate the upload_stemcell step.
If you're observing a Concourse error saying `pq: insert or update on table "worker_resource_config_check_sessions" violates foreign key constraint "worker_resource_config_check__resource_config_check_sessio_fkey"`, it should resolve itself in a matter of seconds.
There is another error where GitHub resources as well as Docker images won't load; in this case, restarting the VirtualBox image was sufficient.
Run `./init-template.sh`: it creates empty placeholders.
`deployment-dependencies.yml` sample:
```yaml
---
deployment:
  bosh-deployment: # or micro-bosh:
    releases:
      route-registrar-boshrelease:
        base_location: https://bosh.io/d/github.com/
        repository: cloudfoundry-community/route-registrar-boshrelease
      shield:
        base_location: https://bosh.io/d/github.com/
        repository: starkandwayne/shield-boshrelease
      xxx_boshrelease:
        base_location: https://bosh.io/d/github.com/
        repository: xxx/yyyy
    errands:
      smoke_tests:
```
Use the `deploy.sh` script like this to manually upload a release.
`deploy.sh` uses bosh cli v2 syntax.
You can use the spruce embedded with `post-generate.sh` to do it! See the post-generate.sh script.
There is not yet a public template sample. Orange employees can have a look at the post-generate.sh in the private paas-template repo.
Simply run `concourse-bootstrap.sh` with the appropriate environment variables set. It loads the `bootstrap-all-init-pipelines` pipeline and triggers it.
This script must also be run when the git configuration is updated (branch or url).

```sh
SECRETS=<path_to_your_secret_dir> FLY_TARGET=<your_target> ./concourse-bootstrap.sh
```
The following tools are required to run `concourse-bootstrap.sh`:
To set up a new paas-template repo, a new secrets repo, or to add a new root deployment, you can run the create-root-depls script to create empty files.
The following tools are required to run create-root-depls:
This repo was inspired by great work shared in:
See CHANGELOG.md
Look into the upgrade directory, and run the required scripts from the Cf-Ops-Automation root directory to benefit from default values.
The following tools are required to run upgrade scripts:
It is also required to have a paas-templates repository clone and/or a config repository clone to be able to perform upgrade operations.