[EXPERIMENTAL] This repo includes deployment instructions for running HDFS/Spark inside Docker containers. It also includes spark-notebook and the HDFS FileBrowser.
TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources #72
I am trying a simple application with Scala Spark, as follows:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("spark://192.168.20.108:7077")
  .setAppName("BI-SERVICE")
val sc = new SparkContext(conf)
val numbersRdd = sc.parallelize((1 to 10000).toList)
numbersRdd.saveAsTextFile("hdfs://192.168.20.108:8020/numbers-as-text02")
However, the Spark job keeps running and shows this warning message:
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
and it never writes the HDFS file.
Can anyone help resolve the problem?
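From what I understand, this warning often means the application is requesting more cores or memory per executor than any registered worker can offer, so no executor ever starts. Below is a minimal sketch of capping the application's resource requests via SparkConf; the specific values (1 core, 512m) are only guesses and would need to be tuned against what the workers in the cluster UI actually report.

import org.apache.spark.{SparkConf, SparkContext}

// Assumed resource caps: keep them at or below what a single worker advertises in the cluster UI.
val conf = new SparkConf()
  .setMaster("spark://192.168.20.108:7077")
  .setAppName("BI-SERVICE")
  .set("spark.cores.max", "1")           // total cores the application may claim across the cluster
  .set("spark.executor.memory", "512m")  // memory per executor; must fit within a worker's available memory
val sc = new SparkContext(conf)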