hashicorp / nomad-spark

DEPRECATED: Apache Spark with native support for Nomad as a scheduler

It's not using my docker driver settings if I don't include --conf spark.nomad.dockerImage #15

Open · TygerTaco opened this issue 6 years ago

TygerTaco commented 6 years ago

I would like to use a docker driver stanza to set my image information, but if I don't include --conf spark.nomad.dockerImage in my spark-submit command, it acts as if it's trying to run directly on a Nomad node: no executors ever register, and it just repeats the following warning over and over:

18/07/02 15:28:34 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

More details: I have a template that looks something like this:

job "template" {
  meta {
    "spark.nomad.role" = "application"
  }
  group "executor-template" {
    task "executor-template" {
      meta {
        "spark.nomad.role" = "executor"
      }
      driver = "docker"
      config {
        image = "java"
        auth{
          server_address ="hub.docker.com"
        }
      }
    }
  }

I want to be able to just run:

spark-submit \
--class org.apache.spark.examples.JavaSparkPi \
--master nomad \
--conf spark.nomad.datacenters=us-east-1a,us-east-1b,us-east-1c \
--conf spark.nomad.job.template=/spark.json \
--conf spark.nomad.sparkDistribution=https://github.com/hashicorp/nomad-spark/releases/download/v2.3.0-nomad-0.7.0-20180618/spark-2.3.0-bin-nomad-0.7.0-20180618.tgz \
--verbose \
spark-examples_2.11-2.1.0-SNAPSHOT.jar 

but as said, that doesn't work, and I'm forced to run this instead:

spark-submit \
--class org.apache.spark.examples.JavaSparkPi \
--master nomad \
--conf spark.nomad.docker.serverAddress=hub.docker.com \
--conf spark.nomad.dockerImage=java \
--conf spark.nomad.datacenters=us-east-1a,us-east-1b,us-east-1c \
--conf spark.nomad.job.template=/spark.json \
--conf spark.nomad.sparkDistribution=https://github.com/hashicorp/nomad-spark/releases/download/v2.3.0-nomad-0.7.0-20180618/spark-2.3.0-bin-nomad-0.7.0-20180618.tgz \
--verbose \
spark-examples_2.11-2.1.0-SNAPSHOT.jar 

This will overwrite anything I have set in the docker driver stanza, which means that I can't use any kind of interpolation.
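
For example, this is the kind of interpolation I'd like to use in the executor task (just a sketch; the node meta key java_version is made up to illustrate the idea):

task "executor-template" {
  meta {
    "spark.nomad.role" = "executor"
  }
  driver = "docker"
  config {
    # Nomad interpolates node attributes and meta at placement time,
    # but setting spark.nomad.dockerImage replaces this entire value.
    image = "java:${meta.java_version}"
    auth {
      server_address = "hub.docker.com"
    }
  }
}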

alexnaspo commented 6 years ago

I am seeing a similar issue, with the "Initial job has not accepted any resources" warning triggering over and over again.