Closed — lybavsky closed this issue 1 week ago.
Having the same issue. The mutation webhook is running, and the driver pod is running. However, the webhook doesn't appear to be able to find the executor pod's container and add `allowPrivilegeEscalation: false`. This causes OPA to block the executor from launching, since `allowPrivilegeEscalation` isn't set in the executor pod.

The error is:

```
Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. admission webhook "validation.gatekeeper.sh" denied the request: [privilege-escalation] Privilege escalation container is not allowed: spark-kubernetes-executor
```

This is saying that the spark-kubernetes-executor container is trying to run with privilege escalation.
Running in the `spark` namespace.
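For context, a Gatekeeper constraint of roughly this shape would produce the denial above. This is a sketch: it assumes the cluster uses the `K8sPSPAllowPrivilegeEscalationContainer` template from the gatekeeper-library, and the constraint name `privilege-escalation` is taken from the error message.

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPAllowPrivilegeEscalationContainer
metadata:
  name: privilege-escalation
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["spark"]
```

With this in place, any container in the `spark` namespace that does not explicitly set `allowPrivilegeEscalation: false` is rejected at admission time, which is why the missing mutation blocks the executor.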
The executor pod is definitely not getting the security context applied. I removed OPA from the namespace and got an executor pod to launch. From that pod's spec:

```yaml
securityContext:
  runAsUser: 185
```
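One way to confirm what actually landed on the executor is to read the container-level security context straight off the pod. This is a command sketch; the executor pod name is a placeholder:

```sh
# Print the securityContext of the first container in the executor pod
kubectl -n spark get pod <executor-pod-name> \
  -o jsonpath='{.spec.containers[0].securityContext}'
```

If the webhook had patched the pod, the output would include `allowPrivilegeEscalation`; here it shows only `runAsUser`.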
https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/1377 has been merged. I've just removed the operator from my cluster and don't think I'll be able to test soon, but I'm curious to see whether the behavior changes.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. Please comment "/reopen" to reopen it.
Hello everyone. We've found that the securityContext we put into the SparkApplication custom resource like this:

```yaml
securityContext:
  allowPrivilegeEscalation: true
  runAsUser: 1000
  capabilities:
    add: [ "SYS_PTRACE" ]
```

is not present on the created executor pods, which end up with:

```yaml
securityContext:
  fsGroup: 1
  runAsUser: 1000
  seccompProfile:
    type: RuntimeDefault
  supplementalGroups:
```
We are using Spark 3.1.1 and the Spark operator (spark-operator-1.1.6, v1beta2-1.2.3-3.1.1).
On the master branch of this repo we see that addSecurityContext in patch.go matches only the default container name "executor", while Spark 3 names the container "spark-kubernetes-executor". It would be possible to use the "findContainer" function here, but it is not used at the moment.
Please review my merge request: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/1377

What is the way to solve our problem? Thank you.