cryostatio / cryostat-operator

A Kubernetes Operator to facilitate the setup and management of Cryostat.
https://cryostat.io
Apache License 2.0

[Epic] Cryostat3 test harness #709

Open mwangggg opened 10 months ago

mwangggg commented 10 months ago

Describe the feature

We want to create a test harness with common test cases to run against a Cryostat deployment, with the ability to deploy either 2.4 or 3.0 and to vary deployment parameters (report sidecar, other CR properties).
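
As a rough sketch of the idea, the varying deployment parameters could be modelled as a matrix the common test cases iterate over. All type and field names below are hypothetical illustrations, not existing operator code:

```go
// Hypothetical sketch of the harness's deployment matrix; none of
// these types exist in the operator today.
package harness

// DeploymentParams captures the CR knobs a test run can vary.
type DeploymentParams struct {
	CryostatVersion string // e.g. "2.4" or "3.0"
	ReportSidecar   bool   // whether to deploy the report generator sidecar
	// ...further CR properties as they become interesting to test
}

// Matrix enumerates the combinations the common test cases run against.
var Matrix = []DeploymentParams{
	{CryostatVersion: "2.4", ReportSidecar: false},
	{CryostatVersion: "2.4", ReportSidecar: true},
	{CryostatVersion: "3.0", ReportSidecar: false},
	{CryostatVersion: "3.0", ReportSidecar: true},
}
```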

andrewazores commented 10 months ago

The existing scorecard test suite looks like it could be a candidate to fulfill the criteria, but there are a few things I wonder about.

  1. The test cases seem to be hardcoded here: https://github.com/tthvo/cryostat-operator/blob/dbf4f5ef718e73556a4978e1bf33599fc3a81d8c/internal/images/custom-scorecard-tests/main.go#L106. This works but seems annoying and error-prone to maintain if we want to make this into a test matrix where we can run the same tests against different CRs. Is there a cleaner way to define the test cases? (A table-driven registry, sketched after this list, is one option.)
  2. The Operator will naturally evolve with the Cryostat version, so it will drop the ability to deploy 2.4 instances in favour of the ability to deploy 3.0 instances. I believe this is workable, but we must confirm that we can specify the Operator version to install and which CRs to provide it, so that we can, for example, install a 2.4 Operator and deploy Cryostat 2.4, run the suite of tests, then install a 3.0 Operator and Cryostat 3.0 and run the same tests.
  3. What are our options for conditionally enabling certain tests to run only against certain Cryostat versions? For the most part we are aiming for broad API compatibility, but there are a few intentional breaking API changes, so we need a nice way to run tests conditionally against versions, either to simply turn off tests or to provide alternate versions of closely equivalent tests (the registry sketch below gates cases by version for exactly this reason).
  4. Is this simply the wrong place to add this test capability? This scorecard test suite will also be used for downstream product builds and will require a pass in the build pipeline. In this scenario, tests should only be run against the new version being built, so that any sporadic failures in the older versions don't cause the new version's build to fail. We already know that such sporadic failures can occur in some workflows, so multiplying that chance by a run matrix makes this problem worse and makes builds unnecessarily difficult to publish.
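
To make item 1 concrete, and to show one possible answer to the version gating in item 3, here is a minimal sketch of a table-driven registry that could replace the hardcoded switch in main.go. Everything here is hypothetical — the `TestCase` type, the `MinVersion`/`MaxVersion` fields, and the placeholder test functions are illustrations, not existing operator code; `DeploymentParams` is restated from the sketch in the issue description:

```go
// Hypothetical sketch only: none of these names exist in the operator.
package harness

import "fmt"

// DeploymentParams as in the sketch in the issue description.
type DeploymentParams struct {
	CryostatVersion string
	ReportSidecar   bool
}

// TestCase is one registry entry; the version bounds gate where it runs.
type TestCase struct {
	Name       string
	Run        func(DeploymentParams) error
	MinVersion string // inclusive lower bound; "" means no bound
	MaxVersion string // inclusive upper bound; "" means no bound
}

// applies reports whether tc should run against the given version.
// Plain string comparison is enough for "2.4" vs "3.0"; a real
// implementation would compare semantic versions instead.
func (tc TestCase) applies(version string) bool {
	if tc.MinVersion != "" && version < tc.MinVersion {
		return false
	}
	if tc.MaxVersion != "" && version > tc.MaxVersion {
		return false
	}
	return true
}

// Registry replaces the hardcoded switch: adding a test case means
// appending an entry here rather than editing dispatch logic.
var Registry = []TestCase{
	{Name: "operator-install", Run: operatorInstallTest},
	{Name: "cryostat-cr", Run: cryostatCRTest},
	// Hypothetical example of a case that only applies from 3.0 on.
	{Name: "v3-breaking-api", Run: v3BreakingAPITest, MinVersion: "3.0"},
}

// Placeholder implementations so the sketch compiles.
func operatorInstallTest(p DeploymentParams) error { return nil }
func cryostatCRTest(p DeploymentParams) error      { return nil }
func v3BreakingAPITest(p DeploymentParams) error   { return nil }

// RunMatrix runs every applicable case against every parameter set.
func RunMatrix(matrix []DeploymentParams) {
	for _, params := range matrix {
		for _, tc := range Registry {
			if !tc.applies(params.CryostatVersion) {
				continue
			}
			if err := tc.Run(params); err != nil {
				fmt.Printf("FAIL %s (%+v): %v\n", tc.Name, params, err)
			}
		}
	}
}
```

A driver like `RunMatrix` would also be a natural place to address item 4: the downstream build pipeline could pass a one-element matrix containing only the version being built, while CI runs the full matrix.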
andrewazores commented 10 months ago

Summary of our team call: