1) Create a new dedicated single-node cluster to run these tests.
2) Create a daily pipeline that runs tests for all the modules, applying them and destroying them when the tests complete.
Like our apply pipeline, this pipeline runs every day at midnight (or another schedule agreed by the team), applies the latest version of every module on the test cluster, and then destroys them. The steps are:
Create a namespace
Create a module.tf file for each module (e.g. s3.tf, ecr.tf).
In each module.tf file, set every variable the module defines; this will help us surface any failures.
Save all the outputs we define in the module as secrets in the namespace.
Consider working through each module as a separate task, to identify all the possibilities we can exercise for that module. Depending on the module type, this may mean creating a single resource or multiple resources.
Apply them overnight through the pipeline and destroy them.
These module.tf files need to be updated whenever an existing module changes or a new module is created.
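As a rough sketch of what one of these test files could look like (the module source, ref, variable names, and outputs below are illustrative placeholders, not the real cloud-platform module interface):

```hcl
# Illustrative sketch only: source ref, variables and outputs are
# placeholders; the real module defines its own interface.
module "s3_test" {
  source = "github.com/ministryofjustice/cloud-platform-terraform-s3-bucket?ref=4.0"

  # Set every variable the module defines, so any regression surfaces.
  team_name        = "module-test"
  business_unit    = "cloud-platform"
  namespace        = var.namespace
  is_production    = "false"
  environment_name = "test"
}

# Save the module outputs as a secret in the test namespace.
resource "kubernetes_secret" "s3_test" {
  metadata {
    name      = "module-test-s3"
    namespace = var.namespace
  }

  data = {
    bucket_arn  = module.s3_test.bucket_arn
    bucket_name = module.s3_test.bucket_name
  }
}
```

Storing the outputs via `kubernetes_secret` mirrors what the environments apply pipeline does for user namespaces, so a missing or renamed output fails the test run visibly.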
3) Create a separate pipeline for every module, which should trigger whenever changes are merged to master in the module repo (these pipelines should be grouped together).
Module changes pipelines:
Like our namespace changes pipeline, a pipeline runs whenever there is a change on any module's master branch (before a release with the new module version is created).
Create a separate pipeline for every module and group them, as we do for divergence/reporting.
Each individual module pipeline should run only that module's module.tf file, but using the master branch. This tests that the new changes do not break what already exists.
We can reuse the module.tf files created above. Concourse triggers the pipeline when changes merge to the module's master branch, takes the code and data from the environments repo (or wherever we decide to maintain it), changes the module version to master, and runs it.
This will show whether the new changes are OK, and also whether they would cause existing module resources to be recreated.
Once we are happy, we can create a new release and update the module.tf test data files with the new changes.
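The only change the module-changes pipeline needs to make to the shared test file is the version pin: a minimal sketch, assuming a git source with a `ref` query string (the module name and ref values are illustrative):

```hcl
# Daily pipeline uses the released pin:
#   source = "github.com/ministryofjustice/cloud-platform-terraform-s3-bucket?ref=4.0"
#
# The module-changes pipeline rewrites the ref to master before planning,
# so the same module.tf exercises the unreleased code:
module "s3_test" {
  source = "github.com/ministryofjustice/cloud-platform-terraform-s3-bucket?ref=master"

  team_name     = "module-test"
  business_unit = "cloud-platform"
  namespace     = var.namespace
}
```

The `terraform plan` output from this run is what tells us whether the master changes would force existing resources to be destroyed and recreated.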
This Epic came out of a firebreak spike: "https://github.com/ministryofjustice/cloud-platform/issues/1838"
Document from the spike: "https://docs.google.com/document/d/1UgXM0uFyB9tJvZv-G5XQ0u1xiNVD2zZXfS9gmPKjwh8/edit"
Tasks to achieve this are detailed above (tasks 1–3).
Where do we keep these tests/data/code?
Options: the infrastructure repo, the environments repo, or the individual module repo.
Ideally, we would keep them in the individual module repo.
Spike references:
https://github.com/ministryofjustice/cloud-platform-environments/tree/module-test/namespaces/mogaal.cloud-platform.service.justice.gov.uk
https://github.com/ministryofjustice/cloud-platform-concourse/blob/module-test-pipelines/pipelines/mogaal/main/moduletest.yaml