apache-spark-on-k8s / spark

Apache Spark enhanced with a native Kubernetes scheduler back-end. NOTE: this repository is being ARCHIVED, as all new development for the Kubernetes scheduler back-end now happens on https://github.com/apache/spark/
https://spark.apache.org/
Apache License 2.0

Support for smaller CPU slices for the spark worker instances #554

Closed: santanu-dey closed this issue 6 years ago

santanu-dey commented 7 years ago

I might have missed something obvious, but at present I cannot find a way to assign smaller CPU slices to the Spark worker containers. Something like --conf spark.executor.cores=100m to assign 100 millicores to the worker container would be great.

Is there also a way to control the CPU slice for the driver?

I think that at present each of the containers takes up a whole CPU.
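
For context, one way to verify what each executor container is actually requesting is to query the pod specs, as in the sketch below. The default namespace and the spark-role=executor label are assumptions and may differ depending on how the pods were created.

```sh
# Show the CPU request of the first container in each executor pod
# (namespace and label selector are assumptions; adjust for your deployment).
kubectl get pods -n default -l spark-role=executor \
  -o custom-columns=NAME:.metadata.name,CPU_REQUEST:.spec.containers[0].resources.requests.cpu
```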

liyinan926 commented 7 years ago

Having a fractional value for spark.driver.cores and spark.executor.cores is already supported. Please refer to https://github.com/apache-spark-on-k8s/spark/pull/361. For example, you can use --conf spark.executor.cores=0.1, which requests 100 millicores and is the same as --conf spark.executor.cores=100m.
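
To make this concrete, a minimal cluster-mode submission with fractional CPU for both the driver and the executors might look like the sketch below. The API server address, namespace, Docker image names and tags, and example jar path are placeholders, not values taken from this issue; substitute the ones for your own cluster.

```sh
# Submit SparkPi with 100m of CPU requested per driver and executor container.
# All host names, image tags, and paths below are illustrative placeholders.
bin/spark-submit \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --master k8s://https://<api-server-host>:<port> \
  --conf spark.kubernetes.namespace=default \
  --conf spark.executor.instances=2 \
  --conf spark.driver.cores=0.1 \
  --conf spark.executor.cores=0.1 \
  --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:latest \
  --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:latest \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.2.0.jar
```

With cores set to 0.1, each driver and executor pod should request 100m of CPU from the Kubernetes scheduler, so up to ten such containers can be packed onto a single core.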

liyinan926 commented 6 years ago

@santanu-dey Have you tried using fractional values, and can this issue be closed?

liyinan926 commented 6 years ago

Closing this issue. Please reopen if necessary.