Open foxish opened 7 years ago
I like where this is going. Luckily, I have a working prototype of this that I wrote a couple of months ago. It runs as a pod and essentially uses the special scheduler pod annotation to pick up pods and assign them to best-fit nodes (based on some heuristics). We can customize this even further. If this is something we might like to explore, I'll talk to the right channels and see if it's something we can share with the group.
/cc @ccarrizo @khrisrichardson
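For anyone curious what the best-fit heuristic could look like, here's a minimal sketch. It is not the prototype's actual code; the `Node` shape, the `best_fit` name, and the slack-based scoring are all illustrative assumptions. A real scheduler would watch for pending pods carrying the scheduler annotation and then issue a binding against the chosen node; this only shows the node-selection step.

```python
# Hypothetical best-fit selection: among nodes with enough free CPU and
# memory for the pod's request, pick the one that leaves the least slack.
# All names and the scoring formula here are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Node:
    name: str
    free_cpu: float   # cores
    free_mem_mb: int  # MiB


def best_fit(nodes: List[Node], req_cpu: float, req_mem_mb: int) -> Optional[Node]:
    """Return the feasible node with the tightest fit, or None if none fits."""
    feasible = [n for n in nodes
                if n.free_cpu >= req_cpu and n.free_mem_mb >= req_mem_mb]
    if not feasible:
        return None  # pod stays Pending until capacity frees up
    # Score by leftover capacity after placement; smaller means a tighter fit.
    return min(feasible,
               key=lambda n: (n.free_cpu - req_cpu)
                             + (n.free_mem_mb - req_mem_mb) / 1024)


nodes = [Node("a", 4.0, 8192), Node("b", 2.0, 4096), Node("c", 1.0, 2048)]
chosen = best_fit(nodes, req_cpu=1.5, req_mem_mb=3000)
print(chosen.name)  # → b (node "b" is feasible and leaves the least slack)
```

A bin-packing score like this tends to consolidate pods onto fewer nodes; a spread-oriented scheduler would instead prefer the node with the *most* remaining capacity, which is one of the knobs we could expose for customization.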
How would this work with dynamic allocation, where we would prefer the job to be as elastic as possible? I suppose in the dynamic allocation case we would also want to be able to scale the reserved amount. Or, in this case, are we only reserving the minimum resource requirement?
extending thoughts in https://github.com/apache-spark-on-k8s/spark/issues/133#issuecomment-282371564