pegasystems / pega-helm-charts

Orchestrate a Pega Platform™ deployment by using Docker, Kubernetes, and Helm to take advantage of Pega Platform Cloud Choice flexibility.
https://community.pega.com/knowledgebase/articles/cloud-choice
Apache License 2.0

Add ability to configure resources (cpu and memory) for init containers #618

Closed: micgoe closed this issue 3 weeks ago

micgoe commented 1 year ago

Is your feature request related to a problem? Please describe.

The number of init containers deployed for jobs and deployments varies with the chart's execution mode. A recurring issue is that these init containers are created without resource requests and limits. This is a problem on our OpenShift Container Platform (OCP) cluster, whose SecurityContextConstraint mandates resource requests and limits on every deployed container. As a result, pod creation fails whenever the pod definitions inside Jobs or Deployments include init containers that lack the required resource quota specifications.
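The SCC mentioned above aside, the same symptom can be reproduced with a plain namespace quota. Once a policy like the hypothetical `ResourceQuota` below is in place, the API server rejects any pod containing a container, init or regular, that does not declare the corresponding requests and limits:

```yaml
# Hypothetical example of a namespace quota that makes requests/limits
# mandatory: pods whose (init) containers omit them fail admission.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: pega        # namespace name is illustrative
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```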

Init containers without resource quotas that I am aware of:

Pega chart:
- Jobs
- Deployment

Backingservices chart:
- Deployment

Describe the solution you'd like

Every container defined in pods created by the Helm chart should support resource quota configuration via the chart. Not every container needs its own configuration; the 'k8s-wait-for' containers, for instance, could share a single set of resource quotas defined once in 'values.yaml'.
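As a sketch of what that could look like, with hypothetical key names that do not exist in the chart today:

```yaml
# values.yaml (hypothetical keys): one shared block applied to every
# k8s-wait-for init container rendered by the chart
initContainers:
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      cpu: 100m
      memory: 128Mi
```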

Describe alternatives you've considered

Use Kustomize post-processing to manually patch the rendered resources with the required resource quotas.
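A minimal sketch of that workaround, assuming the manifests are first rendered with `helm template` into `rendered.yaml`; the target name and resource values are illustrative:

```yaml
# kustomization.yaml -- post-process the rendered chart output by adding
# a resources block to the first init container of the target workload
resources:
  - rendered.yaml
patches:
  - target:
      kind: Deployment
      name: pega-web            # illustrative workload name
    patch: |-
      - op: add
        path: /spec/template/spec/initContainers/0/resources
        value:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 100m
            memory: 128Mi
```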

micgoe commented 1 year ago

I am open to contributing, but I first wanted to start a discussion and agree on a solution to implement.

Saurabh-16 commented 1 year ago

Hi @micgoe ,

We have merged a PR that assigns CPU and memory to the init containers in order to support namespaces with resource quota limits.

https://github.com/pegasystems/pega-helm-charts/pull/622

For the backingservices chart, I am tagging @reddy-srinivas to make a similar change so that the wait-for-internal-es-cluster init container gets CPU and memory limits as well.
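For readers following the thread, hard-coding resources on an init container in a chart template generally takes the shape below; this is an illustrative sketch, not the actual diff merged in #622:

```yaml
# Illustrative only -- not the actual change from PR #622
initContainers:
  - name: wait-for-pegainstall          # container name is illustrative
    image: {{ .Values.waitFor.image }}  # hypothetical value path
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 128Mi
```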

micgoe commented 5 months ago

Can we make them configurable via variables in the Helm chart instead of hard-coding them?
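One common Helm pattern for this, sketched with a hypothetical `.Values.initResources` key whose default in values.yaml would carry today's fixed values:

```yaml
# Template fragment (hypothetical key names): every k8s-wait-for init
# container renders the same user-overridable resources block
initContainers:
  - name: k8s-wait-for                  # illustrative
    image: {{ .Values.waitFor.image }}  # hypothetical value path
    resources:
      {{- toYaml .Values.initResources | nindent 6 }}
```

Users who need different sizing could then override it, e.g. `--set initResources.limits.cpu=200m`, while everyone else keeps the defaults.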

kishorv10 commented 4 months ago

The wait-for containers are lightweight and require minimal CPU and memory. Customization of the init containers is not permitted, to avoid additional workload for clients. Creating variables for them may not provide significant value and could lead to unnecessary configuration in the values.yaml file. Do you have a specific use case that requires configuring them very differently from the defaults?

github-actions[bot] commented 1 month ago

This issue has been marked as stale because it has been open for 60 days with no activity. This issue will be automatically closed in 30 days if no further activity occurs.

github-actions[bot] commented 3 weeks ago

This issue was closed because it has been inactive for 30 days since being marked as stale.