Ok, I've been able to think about this and have a rough idea what it might look like...
- This shouldn't be tied to the current `mc-bootstrap` setup, as there is work being done to replace it with a CLI; instead this should be complementary and not rely on either the current or new implementation of `mc-bootstrap`.
- A new repo, `management-cluster-test-suites` (name to align with existing), should be created, similar to `cluster-test-suites` and `apptest-framework`, and be built on top of `clustertest`. I suspect that only a single test suite will exist though, rather than the per-provider (+ variants) suites we have for WCs. There might be separate upgrade tests but that's currently out of scope, I think.
- The test suite should take an `E2E_KUBECONFIG` env var pointing at the MC to test (the MC must already have been created).
- The test cases should make use of Spec Labels (or Label Sets) to indicate which MCs they apply to. These labels will then be used to filter the tests that are run based on the MC that is being tested (see the sketch below for what this might look like).
- `mc-bootstrap` should be updated with a new Make target that triggers the test suite (from the latest container image?) using the kubeconfig of the MC created in the earlier step.
- The `mc-bootstrap` Tekton Pipeline should be updated with a new Task, between the generate and cleanup tasks, that runs the new test suite.

Here is an example of how the pipeline could be created. I put this together last Nov: https://github.com/giantswarm/mc-bootstrap/pull/721
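To make the `E2E_KUBECONFIG` / Spec Label idea a bit more concrete, here's a rough sketch of what a spec in the new suite could look like. It assumes Ginkgo v2 (which the existing Go test suites build on) and uses plain client-go instead of the `clustertest` helpers for brevity; the package name, label values and the actual check are invented for illustration only.

```go
package managementcluster_test

import (
	"context"
	"os"
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

var mcClient kubernetes.Interface

func TestManagementCluster(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Management Cluster Suite")
}

var _ = BeforeSuite(func() {
	// The suite never creates the MC; E2E_KUBECONFIG must point at one
	// that already exists (e.g. created by mc-bootstrap earlier in the pipeline).
	kubeconfig := os.Getenv("E2E_KUBECONFIG")
	Expect(kubeconfig).NotTo(BeEmpty(), "E2E_KUBECONFIG must point at the MC kubeconfig")

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	Expect(err).NotTo(HaveOccurred())

	mcClient, err = kubernetes.NewForConfig(cfg)
	Expect(err).NotTo(HaveOccurred())
})

// Spec Labels mark which kinds of MC a test applies to, so a run can be
// filtered down to match the MC under test. The label values here are made up.
var _ = Describe("Default apps on the MC", Label("provider:capa", "mc-validation"), func() {
	It("has running pods in kube-system", func() {
		pods, err := mcClient.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		Expect(err).NotTo(HaveOccurred())
		Expect(pods.Items).NotTo(BeEmpty())
	})
})
```

A run against a given MC could then select only the relevant specs with Ginkgo's label filtering, e.g. `ginkgo --label-filter='provider:capa'`.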
Relevant from on-site discussion (taken from Slack thread):
To summarise:
We create a new `management-cluster-test-suites` repo that runs the cluster-wide "validation" / integration tests against an MC created by mc-bootstrap. The kubeconfig will be passed through from mc-bootstrap and the tests will just run as part of the current pipeline before the teardown task.
For individual app testing we have a pool of (let's say) 10 ephemeral MCs pre-configured that we can make use of for app test suites. We then have automation that ensures at least 1 MC is "pre-warmed" at the start of the business day so the first run of app tests can use it. When an app test is triggered we (in the background) launch the next MC in the pool so it's ready for the next tests. Because we have a limited number of available MCs to use we'd need to use a queue in case there are more than 10 test runs at the same time.
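For the ephemeral-MC pool part, here is a very rough sketch of the acquire/replace/queue flow described above. All names and types are hypothetical, the provisioning hook stands in for whatever mc-bootstrap/CLI automation would actually create the MC, and the capacity handling is simplified to illustrate the idea only.

```go
package mcpool

import (
	"context"
	"fmt"
)

// ManagementCluster is a placeholder for however a pre-provisioned
// ephemeral MC would be represented (name, kubeconfig, provider, ...).
type ManagementCluster struct {
	Name       string
	Kubeconfig string
}

// Pool hands out pre-warmed MCs and asks for a replacement to be
// provisioned in the background each time one is taken.
type Pool struct {
	warm      chan *ManagementCluster
	provision func(ctx context.Context) (*ManagementCluster, error) // hypothetical bootstrap hook
}

func NewPool(capacity int, provision func(ctx context.Context) (*ManagementCluster, error)) *Pool {
	return &Pool{
		warm:      make(chan *ManagementCluster, capacity),
		provision: provision,
	}
}

// Warm pre-provisions one MC so the first test run of the day has no wait.
func (p *Pool) Warm(ctx context.Context) error {
	mc, err := p.provision(ctx)
	if err != nil {
		return fmt.Errorf("pre-warming MC: %w", err)
	}
	p.warm <- mc
	return nil
}

// Acquire returns a warm MC for an app test run and immediately starts
// provisioning its replacement in the background. Callers that arrive
// when nothing is warm block here, which gives the queueing behaviour.
func (p *Pool) Acquire(ctx context.Context) (*ManagementCluster, error) {
	select {
	case mc := <-p.warm:
		go func() { _ = p.Warm(context.Background()) }() // best-effort replacement
		return mc, nil
	case <-ctx.Done():
		return nil, ctx.Err() // queued caller gave up waiting
	}
}
```

Using a buffered channel as the pool means the "more than 10 concurrent test runs" case needs no separate queue: extra callers simply wait in `Acquire` until a replacement MC becomes available.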
I'm going to consider this initial issue completed and close this.
There's still some extra work to do after this issue (add all providers, fix failing tests) but I'm going to create individual issues to track those.
@yulianedyalkova do we have an issue for individual app tests on MCs yet?
Test failures:
Enforce tests once above are resolved: https://github.com/giantswarm/giantswarm/issues/31820
@AverageMarcus we do now - https://github.com/giantswarm/roadmap/issues/3708. I've already added our decisions in it.
✨ Perfect! ✨
Thank you :)
Amazing work
Currently we only ensure that `mc-bootstrap` runs successfully, but we have no further validation of what happens with the apps installed afterwards.
Acceptance criteria:
Task: