Closed: taylorsilva closed this PR 3 years ago.
Side note: we should probably make use of the helm test functionality to run at least the smoke tests when we lint the chart.
@chenbh one thing I've been thinking about while writing these tests is: should the tests be very unit-y, as in testing each individual template and conditional in the template OR should they be more behavioural?
For example, for the `fullnameOverride` param I could write this unit test for each template:
```yaml
- it: fullnameOverride is used
  set:
    fullnameOverride: fullOverride
    web.nameOverride: webOverride
  asserts:
    - equal:
        path: metadata.name
        value: fullOverride-webOverride
```
or I could write one that checks that `fullnameOverride` is used correctly across all templates that it's used in.
Backfilling tests is already hard, and I found it easier to write the template-specific unit tests first. I can see us repeating a lot of tests if we continue down this path, though. I'm not sure what the best way to structure the tests is if they're not template-specific. Where would the `fullnameOverride` test(s) live? In their own test suite/file?
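One option is a behavioural suite file per value that targets every template the value touches. A hedged sketch (the suite filename, template paths, and assertion details are assumptions about the chart's layout, not its actual files):

```yaml
# unit-tests/fullname-override_test.yaml (hypothetical path)
suite: fullnameOverride behaviour
templates:
  # assumed template paths; adjust to the chart's actual files
  - templates/web-deployment.yaml
  - templates/worker-deployment.yaml
tests:
  - it: prefixes every resource name with the override
    set:
      fullnameOverride: fullOverride
    asserts:
      # runs against each template listed above
      - matchRegex:
          path: metadata.name
          pattern: ^fullOverride
```

One suite file per behaviour would keep the value-centric tests out of the per-template files.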
> should the tests be very unit-y, as in testing each individual template and conditional in the template OR should they be more behavioural?
I would prefer behaviour, but the unit-y test does have a really nice organizational structure that doesn't pollute my eyes.
I've been thinking about this some more and I'm no longer sure we need a lot of the unit tests. A brief look at the commit history shows that the vast majority of PRs just add new values to values.yaml. Going by the assumption that new fields don't really change old template output, each unit test feels like it'll be written once but never fail, because we never change that part of the template again.
We should have unit tests for large behaviour stuff like worker-only deployments, external postgres, etc., but I feel like we probably don't need a unit test for every single env var configuration and other small details.
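For those "large behaviour" cases, a single suite per scenario might be enough. A hedged sketch of a worker-only check (the `web.enabled` flag and the template path are assumptions about the chart's values, not confirmed names):

```yaml
suite: worker-only deployment
templates:
  - templates/web-deployment.yaml  # assumed path
tests:
  - it: renders no web deployment when web is disabled
    set:
      web.enabled: false
    asserts:
      # the template should produce no documents at all
      - hasDocuments:
          count: 0
```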
> We should have unit tests for large behaviour stuff

I agree with this; it feels too tedious and rigid to continue with this first iteration. When I have some time I'll try writing some tests like that. This was a good first pass to get a feel for what it's like writing these kinds of tests.
Existing Issue
Fixes #174
Why do we need this PR?
We want to stop using topgun/k8s for testing the chart. This starts moving us in that direction.
Changes proposed in this pull request
Add a folder called `unit-tests` that uses the `unittest` helm plugin from https://github.com/quintush/helm-unittest. After installing the plugin (run `make install-unittest`), you can run the tests by running `make test` in the root of this repo.

The only unit tests so far are for the web-deployment and required-check templates. Looking for feedback on the tests written so far, and to see if this is something we want to keep building/working on.
This also fixes one bug where `web-deployment` was not rendering `strategy` correctly from `values.yaml`.
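A regression test could pin that behaviour down. A hedged sketch, assuming the value lives under `web.strategy` and the template path shown (both unconfirmed):

```yaml
suite: web deployment strategy
templates:
  - templates/web-deployment.yaml  # assumed path
tests:
  - it: renders the strategy from values
    set:
      web.strategy:
        type: Recreate
    asserts:
      # the value should flow through to the rendered Deployment spec
      - equal:
          path: spec.strategy.type
          value: Recreate
```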
Contributor Checklist
README.md

- `master` is for changes related to the current release of the `concourse/concourse:latest` image and should be good to publish immediately
- `dev` is for changes related to the next release of Concourse (aka unpublished code on `master` in concourse/concourse)

Reviewer Checklist