I would like to use a docker driver stanza to set my image information, but if I don't include `--conf spark.nomad.dockerImage` in my spark-submit, it acts as if it's trying to run directly on a Nomad node. For example, it just repeats the following over and over:

```
18/07/02 15:28:34 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
```
More details:
I have a template that looks something like this:
I want to be able to just run:
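Something like the following, where the class, jar, and Nomad address are placeholders, and I'm assuming the template is passed via `spark.nomad.job.template`:

```sh
spark-submit \
  --class org.example.MyApp \
  --master nomad:http://nomad.example.com:4646 \
  --deploy-mode cluster \
  --conf spark.nomad.job.template=/path/to/job-template.json \
  my-app.jar
```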
But as noted, that doesn't work, and I'm forced to run:
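That is, the same command with the image pinned on the command line (image name is again a placeholder):

```sh
spark-submit \
  --class org.example.MyApp \
  --master nomad:http://nomad.example.com:4646 \
  --deploy-mode cluster \
  --conf spark.nomad.job.template=/path/to/job-template.json \
  --conf spark.nomad.dockerImage=registry.example.com/spark:2.3.1 \
  my-app.jar
```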
This overwrites anything I have set in the docker driver stanza, which means I cannot use any kind of interpolation for the image.