Closed mohitsharma-in closed 3 years ago
Hi @mohit94614!
Could you share the OPA policies you are setting? I'll try to reproduce the issue and see if it could be solved with a specific configuration of the chart.
@pablogalegoc - we are implementing OPA via Tanzu Mission Control. This is the policy we have right now:
```console
$ tmc clustergroup security-policy get --cluster-group-name=tkg-wdc-cg custom-policy
fullName:
  clusterGroupName: tkg-wdc-cg
  name: custom-policy
  orgId: my_ORG_id
meta:
  creationTime: "2021-06-30T05:44:39.657007Z"
  resourceVersion: "3014"
  uid: my_UID
  updateTime: "2021-06-30T05:49:13.770065Z"
spec:
  input:
    allowedVolumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
    - projected
    - nfs
    - downwardAPI
    linuxCapabilities:
      allowedCapabilities:
      - '*'
      requiredDropCapabilities: []
    runAsUser:
      rule: MustRunAsNonRoot
  namespaceSelector:
    matchExpressions:
    - key: security-policy
      operator: In
      values:
      - custom
  recipe: custom
  recipeVersion: v1
  type: security-policy
```
@mohit94614 @govindkailas I've reproduced the problem, but I'm afraid that on version 10.9.0 you will not be able to solve it through parameters. It is not until 11.1.0 that there is support for setting the full `containerSecurityContext` in the `values.yaml`. From that version onwards I've set:

```yaml
containerSecurityContext:
  privileged: false
  allowPrivilegeEscalation: false
```

and the pods get created without issues.
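For context, a full override file passed via `helm install -f custom-values.yaml` might look like the sketch below. This assumes chart version 11.1.0 or later; the `securityContext` keys are an assumption based on common Bitnami chart conventions and should be checked against your chart version's `values.yaml`.

```yaml
# Sketch of a values override (assumes chart >= 11.1.0).
# The securityContext block is hypothetical; verify the key names
# against the chart's own values.yaml before using.
securityContext:
  enabled: true
  runAsUser: 1001                  # run as non-root, matching the policy
containerSecurityContext:
  privileged: false                # run as a non-privileged container
  allowPrivilegeEscalation: false  # drop privilege escalation
```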
@pablogalegoc - thank you for the quick fix. Do you think this can be ported to the other charts as well? We are using Kubeapps and all of the charts would have the same trouble.
It should: TMC security policies are implemented using Gatekeeper, and Gatekeeper policies are not that complicated; they just check for those fields in the spec (example). If a chart allows setting those parameters of the securityContext, it should be good to go.
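In spirit, the check such a policy performs is simple. Real TMC/Gatekeeper policies are written in Rego; the following is only an illustrative Python sketch of the same validation logic (the function name and messages are made up for this example):

```python
def violates_security_policy(ctx: dict) -> list:
    """Return violation messages for a container securityContext dict.

    Mirrors the three constraints discussed in this thread: no privileged
    containers, no privilege escalation, and MustRunAsNonRoot.
    """
    violations = []
    if ctx.get("privileged", False):
        violations.append("privileged containers are not allowed")
    if ctx.get("allowPrivilegeEscalation", True):
        violations.append("allowPrivilegeEscalation must be false")
    if not ctx.get("runAsNonRoot", False) and ctx.get("runAsUser", 0) == 0:
        violations.append("container must run as non-root (MustRunAsNonRoot)")
    return violations

# A securityContext like the one the fixed chart renders passes the check:
print(violates_security_policy(
    {"privileged": False, "allowPrivilegeEscalation": False, "runAsUser": 1001}
))  # []

# An empty securityContext trips the escalation and non-root checks:
print(violates_security_policy({}))  # two violations
```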
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
We are trying to deploy the Redis Helm chart version 10.9.0 (app version 6.0.8) on a Kubernetes cluster where we have enabled a strict security policy with the help of OPA, which requires containers to run as non-root (i.e. UID 1001) with the below settings:
```yaml
allowPrivilegeEscalation: false  # drop privilege escalation
privileged: false                # run as a non-privileged container
```
We are seeing the below issue when we do a helm install.
We have also enabled PSP on the Helm chart; below are the chart values we used for deployment.
We also found out that this issue is resolved when we manually edit the StatefulSet and add the required properties under the security context.
Is there a way to fix this by modifying some of the Helm properties, or some other way?
Thanks