Closed by jinxingwang 1 month ago
@jinxingwang That would be great to have!
Can any maintainer share the update on when this will be prioritized? TIA :)
Any updates on this issue? we need this for compliance with Azure Government cloud. @jinxingwang @sparsh-95
CC @EliranTurgeman
Could you go into a bit more detail on the specific compliance requirements? Is credential auto-mounting disabled in these clusters?
@jacobsalway Sure, Microsoft Defender for Cloud on Azure reported a high-severity security finding:
"Kubernetes clusters should disable automounting API credentials" - Disable automounting API credentials to prevent a potentially compromised Pod resource to run API commands against Kubernetes clusters. For more information, see https://aka.ms/kubepolicydoc.
So in order to mitigate this finding, we need to disable automounting of the service account token by setting `automountServiceAccountToken: false` on all pods. This is not currently possible with the chart, or with the operator in general (for generated Spark app pods).
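For reference, a minimal sketch of what this would look like on a pod spec (the pod name, image, and service account name here are placeholders, not anything the chart currently produces):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # placeholder name
spec:
  automountServiceAccountToken: false   # token is NOT mounted into containers
  serviceAccountName: example-sa        # the SA still exists for RBAC; it's just not auto-mounted
  containers:
  - name: main
    image: example-image:latest         # placeholder image
```

The pod-level field overrides the `automountServiceAccountToken` setting on the ServiceAccount object itself, so it can be applied per workload.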
To clarify, the idea behind the policy is: while some apps obviously require service account tokens, the token shouldn't be mounted by default (to avoid misuse) and should instead only be mounted manually and explicitly as a volume when needed. Here is an example of manually mounting it to a pod:
```yaml
volumes:
- name: kube-api-access
  projected:
    defaultMode: 420
    sources:
    - serviceAccountToken:
        expirationSeconds: 3607
        path: token
    - configMap:
        items:
        - key: ca.crt
          path: ca.crt
        name: kube-root-ca.crt
    - downwardAPI:
        items:
        - fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
          path: namespace
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
  name: kube-api-access
  readOnly: true
```
Anyway, this is a required policy for working with Azure Government cloud.
@Aransh Thanks, appreciate the details and the links. It would be easy enough to add this as a configurable field on the controller and webhook deployment specs in the Helm chart. However, for the actual Spark driver pod to have this field, it would require a change either to Spark core, to the webhook in the operator, or to a pod template spec.
Are both required for compliance in this environment? I would imagine so given the driver also needs a service account in order to request and watch executor pods.
@jacobsalway Yup, both are required for compliance
Hi team, any update on this case?
On the controller side: I'd suggest modifying the chart to add the `automountServiceAccountToken` field, e.g. with a Kustomize patch. I'm happy to give this a go and provide a patch YAML in another comment on this issue.
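As a sketch of that Kustomize approach (the deployment name and the rendered-manifests filename here are assumptions; adjust them to match your actual Helm release):

```yaml
# kustomization.yaml
resources:
- manifests.yaml   # rendered chart output, e.g. from `helm template`
patches:
- target:
    kind: Deployment
    name: spark-operator-controller   # assumed name; check your release's deployment names
  patch: |-
    - op: add
      path: /spec/template/spec/automountServiceAccountToken
      value: false
```

The same `target`/`patch` entry can be repeated for the webhook deployment so both controller and webhook pods stop auto-mounting the token.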
On the Spark app side: I would suggest solving this with a pod template. We will support this within the CR in whichever release https://github.com/kubeflow/spark-operator/pull/2141 ends up in.
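Until the CR supports the field directly, one interim option on the Spark app side is Spark core's own pod template support, a sketch (assuming you can pass arbitrary Spark conf through to spark-submit):

```yaml
# driver-pod-template.yaml
# Referenced via the Spark conf key spark.kubernetes.driver.podTemplateFile
# (spark.kubernetes.executor.podTemplateFile exists for executors).
apiVersion: v1
kind: Pod
spec:
  automountServiceAccountToken: false
```

Note the caveat from earlier in this thread: the driver still needs API access to request and watch executor pods, so with this template you would also have to project the token manually as a volume, as in the `kube-api-access` example above.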
Thanks @jacobsalway. I'm working on a PR for the chart now to allow this on the operator itself without workarounds; it should be straightforward. I will take a look at the pod template feature when it's out, sounds interesting!
Would anyone like me to add support for this automountServiceAccountToken feature to the Spark operator?