The default values for the requested CPU resources are too high. Most pods need less than 10m when there is no load, so requesting 200m is unnecessary.
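For illustration, this is what the difference looks like in a plain Kubernetes container spec; the 200m figure is the default mentioned above, while 10m is a request sized for the observed no-load usage (a sketch, not an official recommendation):

```yaml
# Default as currently shipped: 200m is reserved per pod on the node,
# whether it is used or not.
resources:
  requests:
    cpu: 200m
---
# A request sized for the observed no-load usage; the scheduler can then
# pack roughly 20x as many idle pods onto the same node.
resources:
  requests:
    cpu: 10m
```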
stackablectl installs the operators with the default settings. These settings are tuned for a medium-sized cluster, e.g. the commons-operator requests 1Gi of memory, which is too much for an integration test but too little for a large cluster. stackablectl should allow specifying sensible presets instead.
The resources can only be specified per role group, and it is not obvious how this specification is actually used: some containers get hard-coded resources, some get exactly the specified resources, and some get resources calculated from the specified ones.
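As a sketch of the per-role-group specification, assuming the `resources` fragment structure that Stackable CRDs commonly use (`cpu.min`/`cpu.max` and `memory.limit`); the role name is a placeholder:

```yaml
# Hypothetical excerpt from a product custom resource; the resource
# specification is attached to the role group, not to individual containers.
spec:
  servers:              # role (placeholder name)
    roleGroups:
      default:          # role group
        config:
          resources:
            cpu:
              min: 100m   # becomes the CPU request of some containers
              max: "1"    # becomes the CPU limit
            memory:
              limit: 1Gi  # partly consumed by hard-coded overheads
```

Which containers of the resulting pods actually receive these values is exactly the part that is undocumented.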
How the final resources are derived from the specified ones must be documented. For example, the operator can complain that the Java heap size cannot be negative, which implies that hard-coded values are subtracted from the specified memory and the remainder is used as the Java heap size.
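A worked example of how a negative heap can arise, with hypothetical numbers (the actual hard-coded overheads are operator-specific and not documented):

```yaml
# Specified by the user:
memory:
  limit: 512Mi
# Hypothetical derivation inside the operator:
#   heap = limit - hard-coded overhead
#        = 512Mi - 768Mi        # e.g. sidecar/exporter + off-heap buffers
#        = -256Mi               # -> "Java heap size cannot be negative"
# With limit: 1Gi the same formula would yield 1024Mi - 768Mi = 256Mi,
# i.e. the JVM would be started with roughly -Xmx256m.
```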
It is sometimes not possible to restrict the actual requested resources to the desired values; the specification should be extended in these cases. For instance, init containers often use hard-coded values which are often too high. These values define a minimum resource request which cannot be lowered by defining custom resource requirements.
If two resource-intensive test cases are run in parallel, they can deadlock due to insufficient resources, which is often perceived as flakiness. The required resources per test case should be determined, and it should be ensured that the test cases with the highest resource requirements can run on the test clusters in parallel.