An invalid Kubernetes CronJob whose Jobs do not properly terminate their containers can fill the cluster with running workloads that unnecessarily consume cluster resources. If there are many such Pods, the Kubernetes scheduler stops scheduling BTP Manager's Pods. Add a priority class that gives btp-manager-controller-manager and sap-btp-operator-controller-manager enough priority so that in such cases they have a better chance of being scheduled than the problematic workloads. See the Istio module's priority class configuration for reference.
```shell
$ kubectl get deployments -A -o custom-columns="NAME:.metadata.name,PRIORITY_CLASS:.spec.template.spec.priorityClassName"
NAME                                  PRIORITY_CLASS
btp-manager-controller-manager        <none>
sap-btp-operator-controller-manager   <none>
```
AC
- [x] 1) Check which priority classes other modules use, contact the PO, and decide together.
- [x] 2) Create a priority class in the BTP Manager repository.
- [x] 3) Configure the priority class in the btp-manager-controller-manager deployment.
- [x] 4) Configure the priority class in the sap-btp-operator-controller-manager deployment.
- [ ] 5) consult with SRE to make sure that our priority class does not clash with other classes.
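Steps 2)–4) could look roughly like the sketch below — a minimal PriorityClass plus the one line each deployment needs. The class name `btp-manager-priority-class` and the value `2000000` are assumptions for illustration, not the values actually chosen in the BTP Manager repository, and the Deployment excerpt is trimmed to the relevant fields.

```yaml
# Hypothetical PriorityClass for BTP Manager workloads.
# Name and value are placeholders; the final value must be agreed with SRE
# (step 5) so it does not clash with existing cluster priority classes.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: btp-manager-priority-class
value: 2000000
globalDefault: false
description: "Scheduling priority for BTP Manager controller workloads."
---
# Excerpt: referencing the class in a Deployment's Pod template
# (same change applies to sap-btp-operator-controller-manager).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: btp-manager-controller-manager
spec:
  template:
    spec:
      priorityClassName: btp-manager-priority-class
      # ...containers and the rest of the Pod spec unchanged...
```

With the class in place, the `kubectl get deployments` check above should show the class name instead of `<none>` in the PRIORITY_CLASS column.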