IBM / spark-s3-shuffle

An S3 shuffle plugin for Apache Spark to enable elastic scaling for generic Spark workloads.
Apache License 2.0

Example spark-submit using spark-s3-shuffle whilst running with Dynamic Allocation #8

Open awdavidson opened 2 years ago

awdavidson commented 2 years ago

It would be good to include an example in the README. Whilst it may be obvious to some developers what is required, others may be unsure.

Using spark-s3-shuffle whilst running an application with dynamic allocation may trip some people up. Typically, when dynamic allocation is enabled you are also required to enable the external shuffle service. That service may not be available when running Spark on Kubernetes, and executors will fail to register with it. The workaround is to enable shuffle tracking instead and configure the shuffle tracking timeout so that executors can be gracefully removed.

For example, some of the additional configuration required (a fuller spark-submit sketch follows the list below):

--conf spark.executor.extraClassPath=some.jar      # required so executors can load the S3ShuffleManager and related classes
--conf spark.dynamicAllocation.enabled=true
--conf spark.dynamicAllocation.shuffleTracking.enabled=true
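
Putting it together, a complete spark-submit for Spark on Kubernetes might look roughly like the sketch below. The image, jar path, bucket, application class and timeout value are placeholders, and the plugin-specific settings (the fully qualified S3ShuffleManager class name and the spark.shuffle.s3.rootDir key) are my best guess and should be double-checked against the README:

spark-submit \
  --master k8s://https://<kubernetes-api-server>:443 \
  --deploy-mode cluster \
  --name example-app \
  --class com.example.ExampleApp \
  --conf spark.kubernetes.container.image=<image-with-spark-s3-shuffle-jar> \
  --conf spark.driver.extraClassPath=/opt/spark/jars/spark-s3-shuffle.jar \
  --conf spark.executor.extraClassPath=/opt/spark/jars/spark-s3-shuffle.jar \
  --conf spark.shuffle.manager=org.apache.spark.shuffle.S3ShuffleManager \
  --conf spark.shuffle.s3.rootDir=s3a://<bucket>/<prefix> \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.timeout=30s \
  local:///opt/spark/app/example-app.jar

Note that no spark.shuffle.service.enabled line is needed: shuffle tracking replaces the external shuffle service here, which is exactly what avoids the registration failures on Kubernetes.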
pspoerri commented 1 year ago

Good point! Thank you! These parameters are very well documented on the Spark Configuration page.

You might also need to set

--conf spark.dynamicAllocation.shuffleTracking.timeout=0 

since the stale executors might be kept around otherwise.
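In other words, the dynamic allocation block from the example above would become:

--conf spark.dynamicAllocation.enabled=true
--conf spark.dynamicAllocation.shuffleTracking.enabled=true
--conf spark.dynamicAllocation.shuffleTracking.timeout=0      # executors holding shuffle data can be removed as soon as they go idle, since the data lives in S3 rather than on the executor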

I need to spend some time on this.