Bilwang129 opened this issue 6 years ago.
Which version of Kubernetes are you using? Can you run the following command?
kubectl describe limits --namespace=automodel
Version of Kubernetes: v1.8.5
I can run kubectl describe limits --namespace=automodel, but it returns nothing.
@liyinan926
OK, then it makes sense why it said must specify limits.cpu,limits.memory,requests.cpu,requests.memory: the namespace you used does not have a default value for any of them. It's weird that the run using the container-local example jar worked fine. I suspect there's a bug in the code such that when local dependencies need to be uploaded and an additional step is needed to set up the init-container, the resource requests set in the BaseDriverConfigurationStep get lost.
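(For context: this error is what Kubernetes reports when a namespace has a ResourceQuota but no defaults, so every container must declare explicit requests and limits. Below is a minimal sketch of a LimitRange that would give the namespace such defaults; the object name and values are illustrative assumptions, not taken from this thread:)

# Hypothetical LimitRange providing default requests/limits, so pods
# that omit them are still admitted under the namespace's ResourceQuota.
kubectl create --namespace=automodel -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: automodel-defaults   # illustrative name
spec:
  limits:
  - type: Container
    default:          # limits applied to containers that omit them
      cpu: "1"
      memory: 1Gi
    defaultRequest:   # requests applied to containers that omit them
      cpu: 100m
      memory: 512Mi
EOF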
I also ran the following command to specify limits.cpu, limits.memory, requests.cpu, and requests.memory:
requests.memory: spark.driver.memory=500M
requests.cpu: spark.driver.cores=0.1
limits.memory: spark.driver.memory + spark.kubernetes.driver.memoryOverhead = 900M
limits.cpu: spark.kubernetes.driver.limit.cores=1
export SPARK_HOME=/home/hadoop/nan.wang/spark-2.2.0-k8s-0.5.0-bin-2.7.3
${SPARK_HOME}/bin/spark-submit \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --kubernetes-namespace automodel \
  --conf spark.executor.instances=5 \
  --conf spark.app.name=spark-pi \
  --conf spark.driver.memory=500M \
  --conf spark.executor.memory=500M \
  --conf spark.kubernetes.driver.memoryOverhead=400M \
  --conf spark.kubernetes.executor.memoryOverhead=400M \
  --conf spark.driver.cores=0.1 \
  --conf spark.executor.cores=0.1 \
  --conf spark.kubernetes.driver.limit.cores=1 \
  --conf spark.kubernetes.executor.limit.cores=1 \
  --conf spark.kubernetes.driver.docker.image=sz-pg-oam-docker-hub-001.tendcloud.com/library/spark-driver:v2.2.0-kubernetes-0.5.0 \
  --conf spark.kubernetes.executor.docker.image=sz-pg-oam-docker-hub-001.tendcloud.com/library/spark-executor:v2.2.0-kubernetes-0.5.0 \
  --conf spark.kubernetes.initcontainer.docker.image=sz-pg-oam-docker-hub-001.tendcloud.com/library/spark-init:v2.2.0-kubernetes-0.5.0 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.resourceStagingServer.uri=http://172.20.0.115:30001 \
  ${SPARK_HOME}/examples/jars/spark-examples_2.11-2.2.0-k8s-0.5.0.jar
But this also does not work.
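(One way to check the suspected bug would be to inspect the submitted driver pod and see whether the resource requests survived the init-container setup step; a minimal sketch, where spark-pi-driver stands in for the actual driver pod name printed by spark-submit:)

# Show the requests/limits actually set on the driver pod's containers.
kubectl get pod spark-pi-driver --namespace=automodel \
  -o jsonpath='{.spec.containers[*].resources}'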
What doesn't work here? You were not able to run the above example (even if it used the container-local example jar)? Or something else?
I have modified the above command. It still does not work when using Dependency Management.
OK. It looks like a bug.
Is there another way to run a local application jar from the submitting machine?
@liyinan926
@Bilwang129 if you have access to an HDFS cluster, or cloud storage options such as S3, you can upload the jars to those places, and use the remote URLs of those jars. Spark can automatically download them.
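(As a sketch of that workaround; the HDFS path and namenode address below are placeholders, and the other --conf flags from the earlier command are omitted for brevity:)

# Upload the application jar to HDFS once...
hdfs dfs -mkdir -p /user/spark/jars
hdfs dfs -put ${SPARK_HOME}/examples/jars/spark-examples_2.11-2.2.0-k8s-0.5.0.jar /user/spark/jars/

# ...then point spark-submit at the remote URL instead of a local file,
# so the resource staging server is no longer needed.
${SPARK_HOME}/bin/spark-submit \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --kubernetes-namespace automodel \
  hdfs://namenode.example.com:8020/user/spark/jars/spark-examples_2.11-2.2.0-k8s-0.5.0.jar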
@liyinan926 @foxish When running the following command (running the local jar via Dependency Management):
I get the following errors:
But when running the following command (with the same parameters, running the jar shipped inside the Docker image):
It runs successfully, and the resource quota looks like the following:
I want to know which confs I can use to specify limits.cpu, limits.memory, requests.cpu, and requests.memory for the driver & executor in spark-submit.
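(For reference, pulling together the mapping already worked out above in this thread, for the v2.2.0-kubernetes-0.5.0 fork; the values shown are just the ones used earlier:)

# requests.memory <- spark.driver.memory / spark.executor.memory
# requests.cpu    <- spark.driver.cores / spark.executor.cores
# limits.memory   <- spark.{driver,executor}.memory + spark.kubernetes.{driver,executor}.memoryOverhead
# limits.cpu      <- spark.kubernetes.{driver,executor}.limit.cores
--conf spark.driver.memory=500M \
--conf spark.driver.cores=0.1 \
--conf spark.kubernetes.driver.memoryOverhead=400M \
--conf spark.kubernetes.driver.limit.cores=1 \
--conf spark.executor.memory=500M \
--conf spark.executor.cores=0.1 \
--conf spark.kubernetes.executor.memoryOverhead=400M \
--conf spark.kubernetes.executor.limit.cores=1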