ravi-ramadoss opened this issue 6 years ago
You need the init container and the RSS (resource staging server) defined as part of the conf. Look at the usage docs to see an example.
I followed the steps from the page https://apache-spark-on-k8s.github.io/userdocs/running-on-kubernetes.html
I am sure I am missing something. Is there a walkthrough or example for this?
kubectl create -f conf/kubernetes-resource-staging-server.yaml
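Before submitting, it can help to confirm the staging server actually came up. A minimal sketch (the exact pod/service names depend on what `conf/kubernetes-resource-staging-server.yaml` defines, so the `grep` pattern here is an assumption):

```shell
# Sketch: check that the resource staging server pod is Running and that
# its NodePort service (port 31000 in the submission below) exists.
kubectl get pods,svc -n default | grep -i staging
```

If the service is not exposed on the node at port 31000, the `spark.kubernetes.resourceStagingServer.uri` value in the submission will be unreachable.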
$SPARK_HOME/bin/spark-submit \
--deploy-mode cluster \
--class org.apache.spark.examples.SparkPi \
--master k8s://https://192.168.99.100:8443 \
--kubernetes-namespace default \
--conf spark.executor.instances=5 \
--conf spark.app.name=spark-pi \
--conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.2.0-kubernetes-0.5.0 \
--conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.2.0-kubernetes-0.5.0 \
--conf spark.kubernetes.initcontainer.docker.image=kubespark/spark-init:v2.2.0-kubernetes-0.5.0 \
--conf spark.kubernetes.resourceStagingServer.uri=http://192.168.99.100:31000 \
--py-files pi.py \
pi.py
I still get the same error:
MountVolume.SetUp failed for volume "spark-init-properties" : configmaps "spark-pi-1516935374044-init-config" not found
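This error means kubelet could not find the init-container configmap the driver spec references. A hedged debugging sketch (the driver pod name below is an assumption inferred from the configmap name in the error; substitute the actual name from `kubectl get pods`):

```shell
# Sketch: see which configmaps actually exist in the namespace,
# then inspect the driver pod's events and logs for the root cause.
kubectl get configmaps -n default
kubectl describe pod spark-pi-1516935374044-driver -n default
kubectl logs spark-pi-1516935374044-driver -n default
```

Comparing the configmap name in the pod spec against `kubectl get configmaps` output shows whether the configmap was never created or was created in a different namespace.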
I am trying to test a local Spark script. Whenever I try to upload a file from my local Mac system to the minikube cluster, I get the error below.
I see the below error details in the dashboard for the driver pod:
Image: kubespark/spark-driver-py:v2.2.0-kubernetes-0.5.0
Environment variables:
Commands: -
Args: -