[Open] lukleh opened this issue 5 years ago
Hi @lukleh, the Nomad Spark integration does not implement downscaling when using dynamic executors. Upcoming features road-mapped for Nomad 0.10.x will allow us to decouple the shuffle service processes from the executor processes and support proper downscaling. I'll leave this issue open to track that.
Nomad: 0.9.1
Pyspark: 2.4.3
Example pyspark-shell command run against the Nomad cluster:
In the pyspark-shell, load the following:
After spark.dynamicAllocation.executorIdleTimeout elapses, the executors do not get killed and the following logs appear instead, which points to the missing implementation at https://github.com/hashicorp/nomad-spark/blob/nomad-spark-2.4.3/resource-managers/nomad/src/main/scala/org/apache/spark/scheduler/cluster/nomad/NomadClusterSchedulerBackend.scala#L188
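Since the original command was not captured, here is a minimal sketch of the kind of pyspark-shell invocation that exercises this code path. The Nomad master URL and the specific timeout/executor values are assumptions for illustration, not the reporter's actual command; the dynamic-allocation property names are standard Spark configuration keys.

```shell
# Hypothetical sketch -- master URL and values are assumptions, not the
# original command. Dynamic allocation plus an idle timeout is what should
# trigger executor removal (the behavior reported as missing here).
pyspark \
  --master nomad:http://nomad.example.com:4646 \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=4 \
  --conf spark.dynamicAllocation.executorIdleTimeout=60s
```

With these settings, Spark's ExecutorAllocationManager requests that idle executors be killed after the timeout, but the Nomad scheduler backend linked above does not implement that kill path, so the executors stay up.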
Similar issue: https://github.com/hashicorp/nomad-spark/issues/20