Closed rsevilla87 closed 4 months ago
This issue has become stale and will be closed automatically within 7 days.
Bug https://issues.redhat.com/browse/MON-3394 was discovered in ROSA on a cluster with a large number of namespaces and a large number of secrets ("there are a lot of secrets on the cluster: 24464"). So I suggest we add at least 3 secrets per namespace to cover this (3 secrets x 10k namespaces = 30k secrets > 24464). In the old max-namespaces workload, there are 10 secrets in each namespace: https://github.com/cloud-bulldozer/e2e-benchmarking/blob/master/workloads/kube-burner/workloads/max-namespaces/max-namespaces.yml#L95C12-L95C12
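A rough sketch of how the extra secrets could be declared in a kube-burner job; the template path and counts here are illustrative assumptions, not taken from the issue:

```yaml
# Hypothetical kube-burner job fragment (assumed template path and counts):
# each namespaced iteration creates one namespace containing 3 secrets,
# so 10k iterations yield ~30k secrets, exceeding the 24464 seen in MON-3394.
jobs:
  - name: max-namespaces
    jobIterations: 10000
    namespacedIterations: true
    namespace: max-namespaces
    objects:
      - objectTemplate: templates/secret.yml
        replicas: 3
```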
Is your feature request related to a problem? Please describe.
Having a workload able to reproduce the documented cluster-maximums can be very useful for detecting regressions in components that are not exercised intensively by the current workloads.
i.e.:
There are more examples like the above. This new workload shouldn't be used as a rule of thumb to demonstrate the limits of a cluster, but as a helper to detect and verify scenarios we're not currently tracking.
Describe the solution you'd like
The cluster-maximums workload should be self-contained and based on a multi-job benchmark. With this approach, maintaining and updating it will be easier.
I started coding this workload; an initial sketch of how it could look is in the following snippet:
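As a minimal sketch of what such a self-contained, multi-job kube-burner config could look like (the job names, templates, and iteration counts below are hypothetical placeholders, not documented cluster limits or the author's actual snippet):

```yaml
# Hypothetical multi-job kube-burner config: each job targets one
# cluster-maximum scenario. Names and counts are illustrative only.
jobs:
  - name: max-namespaces
    jobIterations: 10000          # one namespace per iteration
    namespacedIterations: true
    namespace: max-namespaces
    objects:
      - objectTemplate: templates/secret.yml
        replicas: 3               # keeps total secrets above the MON-3394 count
  - name: max-services
    jobIterations: 5000
    namespacedIterations: false
    namespace: max-services
    objects:
      - objectTemplate: templates/service.yml
        replicas: 1
```

Keeping each maximum as its own job entry should make it easy to add or retire scenarios independently.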