Kong / charts

Helm chart for Kong

tests: introduce golden tests in kong chart #953

Closed · czeslavo closed this 8 months ago

czeslavo commented 9 months ago

What this PR does / why we need it:

Introduces a golden test case that renders the kong/kong chart templates using the test ci/*-values.yaml files and compares the output with golden files stored in the charts/kong/test/testdata/golden directory. The golden files can be regenerated with make test.golden.update.

It should allow us to easily inspect how changes to templating, default values, etc. affect the output manifests, and to catch potential mistakes.
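
For illustration, here is a minimal sketch of what the Go-based golden test looks like conceptually (not the exact code in this PR; it assumes `helm` is on the PATH and is run from the chart directory, and the `-update-golden` flag name and helper comments are placeholders):

```go
// A minimal sketch, not the exact implementation in this PR: render the chart
// for every ci/*-values.yaml file and compare the output against a golden
// file, regenerating the golden files when an update flag is passed.
package kong_test

import (
	"flag"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"testing"
)

var updateGolden = flag.Bool("update-golden", false, "regenerate golden files instead of comparing against them")

func TestGoldenManifests(t *testing.T) {
	valuesFiles, err := filepath.Glob("ci/*-values.yaml")
	if err != nil {
		t.Fatal(err)
	}
	for _, valuesFile := range valuesFiles {
		name := strings.TrimSuffix(filepath.Base(valuesFile), ".yaml")
		t.Run(name, func(t *testing.T) {
			// Render the chart with this values file.
			out, err := exec.Command("helm", "template", "kong", ".", "-f", valuesFile).CombinedOutput()
			if err != nil {
				t.Fatalf("helm template failed: %v\n%s", err, out)
			}
			// A real implementation would also normalize dynamically
			// generated content (passwords, certificates) here.
			golden := filepath.Join("test", "testdata", "golden", name+".golden.yaml")
			if *updateGolden {
				if err := os.WriteFile(golden, out, 0o600); err != nil {
					t.Fatal(err)
				}
				return
			}
			want, err := os.ReadFile(golden)
			if err != nil {
				t.Fatal(err)
			}
			if string(want) != string(out) {
				t.Errorf("rendered manifests differ from %s; run `make test.golden.update` to refresh", golden)
			}
		})
	}
}
```

With something along these lines, the make test.golden.update target can simply run the tests with the update flag set.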

Which issue this PR fixes

Follow-up to https://github.com/Kong/charts/pull/947.

Special notes for your reviewer:

Checklist


rainest commented 9 months ago

Is there a way to have it run automatically and add the diff set (either as an actual commit or as a PR comment)? Or trigger it on demand?

KIC changes often don't change the final rendered Kong configuration, but changes to the templates almost always do unless they fall outside test coverage. Refactors that aren't expected to change the output (such as the consolidation of Services into a shared function) are the exception rather than the rule.

czeslavo commented 9 months ago

Is there a way to have it run automatically and add the diff set (either as an actual commit or as a PR comment)? Or trigger it on demand?

Do you mean some workflow that would automatically commit changes when golden tests fail so that no one has to run make test.golden.update locally and push the changes? It's probably doable, but it will require some work.

I wouldn't like to block https://github.com/Kong/charts/pull/947 on it. ~If you agree I could try improving the process by adding the automatic commits in a separate PR.~ Let's just take a look at the diff I generated on this branch for #947 and not depend on it. I'll try to come up with something more developer-friendly.

pmalek commented 9 months ago

Do you mean some workflow that would automatically commit changes when golden tests fail so that no one has to run make test.golden.update locally and push the changes?

I would suggest we not try to automate this to the point where no developer action is needed to update the golden files, because that IMHO defeats their purpose. Having a manual target like test.golden.update hits the spot IMO, because you then have to run it after making changes that actually affect the rendered templates.

czeslavo commented 9 months ago

I would suggest we not try to automate this to the point where no developer action is needed to update the golden files, because that IMHO defeats their purpose. Having a manual target like test.golden.update hits the spot IMO, because you then have to run it after making changes that actually affect the rendered templates.

Yeah, the initial idea was to make it work the same way it works in KIC, and that is basically what this PR introduces. I understand Travis' point that it may be too much to remember to run make test.golden.update after every template change. I haven't invested any time in implementing automation around that, so if we can agree it's fine as it is, I'd be more than happy. 😉

tao12345666333 commented 9 months ago

I agree with this approach, which can indeed help us move forward more safely.

I am considering another possibility: whether introducing a tool like helm-unittest could achieve the same effect. https://github.com/helm-unittest/helm-unittest

pmalek commented 8 months ago

I agree with this approach, which can indeed help us move forward more safely.

I am considering another possibility: whether introducing a tool like helm-unittest could achieve the same effect. https://github.com/helm-unittest/helm-unittest

This looks interesting 🤔 One major difference I can see immediately between helm-unittest and this PR is that here we'd be diffing and asserting on the full contents of the generated templates, whereas with the former we'd only be testing what we define in our test cases. This has both pros and cons.

I'd love to hear @czeslavo's thoughts on this, as he's the one who invested his time in proposing this framework.

czeslavo commented 8 months ago

I didn't consider helm-unittest back when implementing this. Indeed, its snapshot testing feature looks like something we could use to achieve similar results. I'll give it a go and report back here.

czeslavo commented 8 months ago

I tried using helm-unittest to achieve the same mechanics as proposed in this PR (test all ci/ values files by rendering templates for each of them and keeping golden files/snapshots), but it's not capable of that as of now. The main limitation I bumped into was the lack of any way to ignore dynamically generated content when comparing/storing snapshots, which made the tests always fail because of generated passwords/certificates. Besides that, it also requires creating a YAML file with test definitions for every test suite, while I think we would prefer to simply render all templates for a given set of values files, as in the proposed solution.
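
For illustration, this is the kind of normalization the Go-based test can apply to work around dynamically generated content (the field names, regular expressions, and helper name below are hypothetical, not necessarily what this PR does):

```go
// A hypothetical normalization step: replace values that change on every
// render (generated passwords, certificates, chart-version labels) with
// stable placeholders so that golden comparisons stay deterministic.
package kong_test

import "regexp"

var dynamicFields = []*regexp.Regexp{
	// Secret-style fields whose values are regenerated on every render.
	regexp.MustCompile(`(?m)^(\s*(?:password|tls\.crt|tls\.key|ca\.crt):\s).*$`),
	// Labels that embed the chart version.
	regexp.MustCompile(`(?m)^(\s*helm\.sh/chart:\s).*$`),
}

// normalizeManifests rewrites known-dynamic values to a fixed placeholder
// before the rendered output is stored or compared.
func normalizeManifests(rendered []byte) []byte {
	for _, re := range dynamicFields {
		rendered = re.ReplaceAll(rendered, []byte("${1}<ignored>"))
	}
	return rendered
}
```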

I also discovered we're not the first to hit these limitations: the helm-chartsnap tool was created for similar reasons (see its motivations). I tried using it as well, and it almost hits the spot for us. It allows ignoring dynamically generated data, but this has to be defined separately for each values file, which in our case would again mean a lot of duplication. I created an issue that could potentially bridge this gap for us (https://github.com/jlandowner/helm-chartsnap/issues/30), but we either have to rely on the maintainer to implement it for us or contribute it ourselves.

I'd suggest going with the solution we have in this PR (it's not that much code in the end) and considering a switch to helm-chartsnap once the issue I created is resolved. WDYT?

pmalek commented 8 months ago

That makes sense. Thanks for checking those @czeslavo! 🙇

In that case let's go with the approach proposed in this PR and see how it works for us in the long run.

czeslavo commented 8 months ago

In the meantime, our issue in helm-chartsnap (https://github.com/jlandowner/helm-chartsnap/issues/30) was resolved. I posted an alternative PR that uses it instead of the Go-based test. PTAL: https://github.com/Kong/charts/pull/978

czeslavo commented 8 months ago

Closed in favor of https://github.com/Kong/charts/pull/978.