Closed · johnfitzy closed this 10 months ago
Thanks for the detailed description! Another user has already reported this and we have the fix in https://github.com/stackabletech/spark-k8s-operator/pull/313. Sadly this was literally 2 days after we branched off 23.11.0, so it's not part of that release. Would you be ok with using the nightly version of the spark-k8s operator?
In all cases the deployed resources should have an ownerReference to the SparkApplication, so deleting that should hopefully clean everything up.
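For reference, each spawned resource should carry an ownerReference along these lines (a sketch only; the apiVersion and the pyspark-pi name are assumptions based on the report below, not copied from a live cluster):

```yaml
# Sketch of the ownerReference the operator sets on the resources it spawns.
# apiVersion and name are assumptions based on the report below.
metadata:
  ownerReferences:
    - apiVersion: spark.stackable.tech/v1alpha1
      kind: SparkApplication
      name: pyspark-pi
      controller: true
      blockOwnerDeletion: true
```

Deleting the owning SparkApplication then lets the Kubernetes garbage collector remove everything it owns.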
Hi, thanks. Yes I can use the nightly version at the moment. I'll keep my eye out for the next release.
I confirm that editing the ClusterRole spark-k8s-clusterrole and adding deletecollection to the verbs section fixes the problem. I think this can be a workaround until a new version of the Helm charts is published.
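For reference, the patched rule could look roughly like this (a sketch: the resource list and the other verbs are illustrative, and the appended deletecollection verb is the only point):

```yaml
# Sketch of the relevant rule in spark-k8s-clusterrole after the edit.
# Keep whatever the shipped role already contains; deletecollection is
# the one addition that unblocks cleanup.
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps", "services", "persistentvolumeclaims"]
    verbs:
      - create
      - get
      - list
      - watch
      - delete
      - deletecollection  # the missing verb
```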
Thanks!
Affected version
23.11
Current and expected behavior
Following the instructions here, the service account created for the job (pyspark-pi) doesn't have the correct permissions to delete Kubernetes resources (Pods, ConfigMaps, PVCs, and Services) after the job finishes.
Example error:
Service Account
RoleBinding
ClusterRole
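Roughly, the RBAC wiring involved looks like this (a sketch; the object names and the namespace are assumptions derived from the job name, and the ClusterRole is the one patched in the workaround above):

```yaml
# Sketch of the ServiceAccount and RoleBinding for the job. All names and
# the namespace are assumptions based on the pyspark-pi job name.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pyspark-pi
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pyspark-pi
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: spark-k8s-clusterrole  # lacks deletecollection in 23.11
subjects:
  - kind: ServiceAccount
    name: pyspark-pi
    namespace: default
```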
Possible solution
No response
Additional context
Environment
No response
Would you like to work on fixing this bug?
maybe