awdavidson opened this issue 2 years ago
Good point! Thank you! These parameters are very well documented on the Spark Configuration page.
You might also need to set
--conf spark.dynamicAllocation.shuffleTracking.timeout=0
since the stale executors might be kept around otherwise.
I need to spend some time on this.
It would be good to include an example in the README. Whilst it may be obvious to some developers what is required, others may be unsure.
Using spark-s3-shuffle whilst running an application with dynamic allocation may trip some people up. Typically, when dynamic allocation is enabled you are also required to enable the external shuffle service. This may not be available when running Spark on Kubernetes, and executors will fail to register with it. The workaround is to enable shuffle tracking and configure the shuffle tracking timeout so that executors can be removed gracefully.
For example, some of the additional configuration required:
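A minimal sketch of the kind of flags this covers, assuming a spark-submit style invocation (the timeout value of 0 follows the suggestion above; the spark-s3-shuffle plugin's own shuffle settings would be supplied alongside these):

# Enable dynamic allocation without the external shuffle service
--conf spark.dynamicAllocation.enabled=true
--conf spark.shuffle.service.enabled=false
# Track shuffle files so executors can still be released gracefully
--conf spark.dynamicAllocation.shuffleTracking.enabled=true
# Allow executors holding tracked shuffle data to be removed immediately, since the shuffle data lives in S3
--conf spark.dynamicAllocation.shuffleTracking.timeout=0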