Open erikerlandson opened 8 years ago
This initial push contains the logic to correctly align the containers' resource requests with the Spark resource settings. However, when a pod fails to schedule due to insufficient resources, it remains in the `Pending` state, so dynamic executor allocation can still schedule an arbitrarily large number of pods that hang around in `Pending`.
Before this merges, I want to add logic for detecting when new executors are stuck in `Pending`, so that the allocator can skip trying to spin up new ones.
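A minimal sketch of what that check could look like, assuming the fabric8 `KubernetesClient` and a hypothetical `spark-role=executor` label on executor pods (the label name and the `maxPending` threshold are assumptions, not what this patch necessarily uses):

```scala
import io.fabric8.kubernetes.client.KubernetesClient
import scala.collection.JavaConverters._

object PendingExecutorCheck {
  // Count executor pods whose phase is still Pending; the allocator can use
  // this to hold off on requesting more executors.
  def pendingExecutorCount(client: KubernetesClient, namespace: String): Int = {
    client.pods()
      .inNamespace(namespace)
      .withLabel("spark-role", "executor")   // assumed label convention
      .list()
      .getItems
      .asScala
      .count(p => p.getStatus.getPhase == "Pending")
  }

  // Skip new executor requests while some are still unschedulable.
  def shouldRequestMoreExecutors(client: KubernetesClient,
                                 namespace: String,
                                 maxPending: Int = 0): Boolean = {
    pendingExecutorCount(client, namespace) <= maxPending
  }
}
```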
@erikerlandson - close this in favor of what we have on https://github.com/apache-spark-on-k8s/spark/?
Add resource requests to the driver and executor containers, corresponding to the Spark resource settings for cores and memory.
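As a rough illustration of that mapping, here is a sketch using the fabric8 `ContainerBuilder`; the container name, image parameter, and memory-overhead handling are illustrative assumptions rather than the patch's actual code:

```scala
import io.fabric8.kubernetes.api.model.{Container, ContainerBuilder, Quantity}

object ExecutorContainerResources {
  // Build an executor container whose resource requests mirror the Spark
  // settings for cores (spark.executor.cores) and memory (spark.executor.memory).
  def buildExecutorContainer(
      image: String,
      cores: Int,                 // e.g. spark.executor.cores
      memoryMiB: Int,             // e.g. spark.executor.memory in MiB
      memoryOverheadMiB: Int): Container = {
    val cpuQuantity = new Quantity(cores.toString)
    val memoryQuantity = new Quantity(s"${memoryMiB + memoryOverheadMiB}Mi")
    new ContainerBuilder()
      .withName("spark-executor")           // assumed name
      .withImage(image)
      .withNewResources()
        .addToRequests("cpu", cpuQuantity)
        .addToRequests("memory", memoryQuantity)
        .addToLimits("memory", memoryQuantity)
      .endResources()
      .build()
  }
}
```

Setting the memory limit equal to the request keeps the executor from being scheduled onto a node that cannot actually back its JVM heap plus overhead; whether to also set a CPU limit is a separate policy choice.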