Having a fractional value for spark.driver.cores and spark.executor.cores is already supported; see https://github.com/apache-spark-on-k8s/spark/pull/361. For example, you can pass --conf spark.executor.cores=0.1, which is equivalent to requesting 100m (100 millicores).
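For reference, a minimal sketch of how the fractional values might be passed on the command line. The master URL, image placeholders, and the application jar below are illustrative and not from this thread; only the two cores settings are what the comment above describes:

```sh
# Request 100 millicores per executor (0.1 == 100m) and half a core for the driver.
bin/spark-submit \
  --deploy-mode cluster \
  --master k8s://https://<kubernetes-apiserver>:<port> \
  --conf spark.driver.cores=0.5 \
  --conf spark.executor.cores=0.1 \
  <other required confs> \
  <application-jar>
```

The fractional value is translated into a Kubernetes CPU request on the driver and executor pods, so the scheduler can pack multiple Spark containers onto a single core.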
@santanu-dey Have you tried using fractional values, and can this issue be closed?
Closing this issue. Please reopen if necessary.
I might have missed something obvious, but presently I do not find a way to assign smaller CPU slices to the Spark worker containers. Something like
--conf spark.executor.cores=100m
would be great for assigning 100 millicores to the worker container. Shouldn't there also be a way to control the CPU slice for the driver?
I think that presently each of the containers takes up a whole CPU.