hashicorp / nomad-spark

DEPRECATED: Apache Spark with native support for Nomad as a scheduler

Executors still queued although job computation finished in dynamic allocation and exhausted resources #26

Open lukleh opened 5 years ago

lukleh commented 5 years ago

Nomad 0.9.1
PySpark 2.4.3

Example pyspark-shell command running against a Nomad cluster (flags excerpted):

    --conf spark.nomad.sparkDistribution=local:/usr/lib/spark \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.shuffle.service.enabled=true \
    --conf spark.dynamicAllocation.minExecutors=1
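For context, the flags above are only the tail of the command. A complete invocation might look like the following sketch; the Nomad HTTP address is a placeholder and not part of the original report:

```shell
# Hypothetical full invocation against a Nomad cluster.
# nomad.example.com:4646 is a placeholder for the real Nomad API address.
pyspark \
  --master nomad:http://nomad.example.com:4646 \
  --conf spark.nomad.sparkDistribution=local:/usr/lib/spark \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1
```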

In the pyspark-shell, load the following:

When spark.dynamicAllocation.maxExecutors is NOT set and the cluster exhausts its resources, executors remain queued even though all computation has finished. (screenshot attached)