liyinan926 opened this issue 3 years ago
@liyinan926 - Do you have an example somewhere of how to use a pod template in a SparkApplication? Would be really cool. Thanks!
Hi, I started working on this issue, and came up with a minimal working implementation in https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/pull/1296
My approach is to put a `template` property in `SparkPodSpec` and serialize its contents to a temporary file before the call to `spark-submit`, passing the file names in the respective `spark.kubernetes.driver.podTemplateFile` and `spark.kubernetes.executor.podTemplateFile` conf options. The files can be cleaned up immediately after `spark-submit`, as it synchronously creates the required Kubernetes resources: (1) the driver pod that has the template applied to it, and (2) a ConfigMap storing the executor template for later use by the driver.
https://spark.apache.org/docs/latest/running-on-kubernetes.html#pod-template
I didn't include the regenerated CRDs in the PR. Further items to decide before moving forward:

- The `sparkoperator.k8s.io/v1beta2` version of the resource. I guess we want to create a new version for it. `k8s.io` is reserved for Kubernetes community maintained APIs; if I am not mistaken, this means we should move away from `sparkoperator.k8s.io` for our next version of the API.
- The `spark.kubernetes.driver.podTemplateContainerName` and `spark.kubernetes.executor.podTemplateContainerName` confs.

@liyinan926 could you take a look?
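To make the proposal concrete, a SparkApplication under the proposed API might look roughly like this. This is a hypothetical sketch: the `template` field name and placement follow the PR's idea of a property on `SparkPodSpec` and are not a merged API, and the field values are illustrative:

```yaml
# Hypothetical shape of the proposed API (not merged).
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi              # illustrative name
spec:
  driver:
    template:                 # serialized to a temp file and passed
      metadata:               # via spark.kubernetes.driver.podTemplateFile
        labels:
          role: driver
      spec:
        schedulerName: my-scheduler   # example of a field the webhook path struggles with
  executor:
    template:                 # passed via spark.kubernetes.executor.podTemplateFile
      spec:
        nodeSelector:
          pool: spark         # illustrative selector
```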
For anyone else who stumbles upon this issue, here is how I was able to do it: use the `sparkConf` setting on your `SparkApplication` to set the template file. Something like:

```yaml
sparkConf:
  spark.kubernetes.driver.podTemplateFile: "/etc/templates/pod_template.yaml"
```
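In context, that `sparkConf` block sits under `spec`, and the template file path must be readable inside the operator pod, since that is where `spark-submit` runs. A fuller sketch of this workaround, with illustrative names:

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: my-app                # illustrative name
spec:
  sparkConf:
    # These paths must exist in the operator pod's filesystem,
    # e.g. via a ConfigMap mounted into the operator Deployment.
    spark.kubernetes.driver.podTemplateFile: "/etc/templates/pod_template.yaml"
    spark.kubernetes.executor.podTemplateFile: "/etc/templates/pod_template.yaml"
```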
Hello :wave: We also experienced issues with the webhook; it stops working sometimes. What about this enhancement? Any news? Thank you :-)
Hello, facing this same issue when using the webhook: NodeAffinity is not being parsed in the SparkApplication. Also, even though the webhook is deployed, it looks like the k8s cluster is not able to access it. Please let me know if any of you knows a way to bypass this. Thanks.
@elihschiff does the spark-operator helm chart have any support for mounting a configmap to the operator pod? I don't see anything in the values or documentation. I definitely don't want to have to be manually editing the deployment in k8s.
Sorry this was 2 years ago, I don't remember exactly what I did. But I wouldn't be surprised if I had modified the helm chart to get it working
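For completeness, one way to get a template file into the operator pod without chart support is to create a ConfigMap and patch the operator Deployment by hand. A sketch, with illustrative ConfigMap, container, and mount names:

```yaml
# ConfigMap holding the template (contents are a minimal example).
apiVersion: v1
kind: ConfigMap
metadata:
  name: spark-pod-templates
data:
  pod_template.yaml: |
    apiVersion: v1
    kind: Pod
    spec:
      containers:
        - name: spark-kubernetes-driver
---
# Fragments to merge into the spark-operator Deployment's pod spec
# so spark-submit can read the file at /etc/templates:
spec:
  template:
    spec:
      volumes:
        - name: pod-templates
          configMap:
            name: spark-pod-templates
      containers:
        - name: spark-operator        # illustrative container name
          volumeMounts:
            - name: pod-templates
              mountPath: /etc/templates
```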
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Given the number of reported issues with the webhook, which stops working after some time due to certificate problems, I'm thinking that the right direction in the long term is to move away from it. For anyone who's already on Spark 3.0, the pod template support for driver/executor pods may be the right way to go. The operator should be able to translate driver and executor configs in `SparkApplication`s into driver and executor pod templates and use the templates when submitting applications. Creating this issue to track the work.