Apache Spark enhanced with a native Kubernetes scheduler back-end. NOTE: this repository is being ARCHIVED, as all new development for the Kubernetes scheduler back-end now happens at https://github.com/apache/spark/
With the conf `spark.kubernetes.hadoop.conf.configmap.name`, we can re-use an existing Hadoop conf ConfigMap instead of creating a new one from the directory indicated by `export HADOOP_CONF_DIR=xxx`.
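For context, a pre-installed Hadoop conf ConfigMap of the kind this setting points at could be created once from an existing conf directory and then reused across submissions. This is only a sketch; the ConfigMap name `hadoop-conf` and the `/etc/hadoop/conf` path are illustrative assumptions, not values from this PR.

```shell
# Create a ConfigMap from an existing Hadoop conf directory (names are illustrative).
# The resulting "hadoop-conf" ConfigMap can then be referenced via
# spark.kubernetes.hadoop.conf.configmap.name instead of shipping a fresh
# ConfigMap generated from HADOOP_CONF_DIR on every spark-submit.
kubectl create configmap hadoop-conf \
  --from-file=/etc/hadoop/conf/core-site.xml \
  --from-file=/etc/hadoop/conf/hdfs-site.xml
```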
…oop conf configmap
Signed-off-by: forrestchen <forrestchen@tencent.com>
What changes were proposed in this pull request?
See issue #580.
How was this patch tested?
Manual tests.
I manually ran the PageRank example with the following script.
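The author's script is not included in the description. A hypothetical invocation, assuming a ConfigMap named `hadoop-conf`, placeholder image names, and a placeholder API server address (none of which come from this PR), might look roughly like:

```shell
# Illustrative only: master URL, image names, jar path, ConfigMap name,
# and HDFS input path are all assumptions, not the author's actual script.
bin/spark-submit \
  --deploy-mode cluster \
  --master k8s://https://<api-server-host>:6443 \
  --class org.apache.spark.examples.SparkPageRank \
  --conf spark.kubernetes.hadoop.conf.configmap.name=hadoop-conf \
  --conf spark.kubernetes.driver.docker.image=<driver-image> \
  --conf spark.kubernetes.executor.docker.image=<executor-image> \
  local:///opt/spark/examples/jars/spark-examples.jar \
  hdfs:///data/pagerank_input 10
```

With the conf set, the driver would mount the existing `hadoop-conf` ConfigMap rather than building a new one from `HADOOP_CONF_DIR`.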