liangguohun / HadoopSpark

Hadoop and Spark config
Apache License 2.0

Error: Could not find or load main class RunWordCount #10

Open liangguohun opened 7 years ago

liangguohun commented 7 years ago

Description Resource Path Location Type

If you follow the Hadoop+Spark book from Tsinghua University Press, you will run into this problem. I'm not sure whether this piece of junk is just a rebranded knock-off, but it has a pile of problems: it dodges the hard parts, keeps repeating meaningless filler, and if you follow its configuration almost every step goes wrong.

More than one scala library found in the build path (/usr/local/scala/lib/scala-library.jar, /home/hduser/workspace/Lib/spark-assembly-1.4.0-hadoop2.6.0.jar). At least one has an incompatible version. Please update the project build path so it contains only one compatible scala library. WordCount Unknown Scala Classpath Problem

The problem here is that the book tells you to add this assembly jar to the build path, but it duplicates the Scala library that is already there, so the two conflict. You can only keep one of them; remove the other.

liangguohun commented 7 years ago

Damn it, the system the book describes also has plenty of bugs: once the password login times out, you can only log back in if you do not switch accounts.

liangguohun commented 7 years ago

After installing Spark, run spark-shell to check which Scala version your Spark build was compiled against.
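
For example, a quick check from inside spark-shell (a minimal sketch; the exact version string depends on your Spark build, 2.10.x being typical for the Spark 1.4 pre-built packages):

```scala
// Run inside spark-shell: prints the Scala version the shell (and thus Spark) was built with.
// The Scala library on the Eclipse build path must match this version.
scala.util.Properties.versionString
// e.g. res0: String = version 2.10.4
```

Whatever it reports is the version the WordCount project has to compile against; keeping only that one Scala library on the build path also resolves the "More than one scala library" error above.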

liangguohun commented 7 years ago

You can refer to this post and switch to 1.6 or to a newer release: http://blog.csdn.net/brotherdong90/article/details/51199075. Newer versions no longer need this assembly jar at all; otherwise one more huge library gets pulled in as a dependency.

liangguohun commented 7 years ago

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Exception in thread "main" java.lang.IllegalArgumentException: System memory 468189184 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.

If you see this error when running in Eclipse, either of the following works:
1. Add the VM argument -Dspark.testing.memory=1073741824 to the Eclipse run configuration.
2. In the source code, set the property right after creating the conf:
   val conf = new SparkConf().setAppName("word count")
   conf.set("spark.testing.memory", "2147480000") // any value larger than 512 MB is enough
Once the Maven environment for the source is set up, the earlier write-ups by others will make more sense.
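
A minimal sketch of fix 2 in context, assuming the standard Spark 1.x API (the setMaster call is my assumption for running inside Eclipse, not something from the book):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RunWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("word count")
      .setMaster("local[*]") // assumed here so the job can run directly from Eclipse
    // Workaround for "System memory ... must be at least 471859200":
    // any value above 512 MB satisfies the check.
    conf.set("spark.testing.memory", "2147480000")
    val sc = new SparkContext(conf)
    // ... word-count logic goes here ...
    sc.stop()
  }
}
```

Fix 1 (the -Dspark.testing.memory VM argument) does the same thing without touching the code, which is the cleaner option if the same jar is later run through spark-submit.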

liangguohun commented 7 years ago

If the "Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties" message bothers you, just copy the config into the project: cp /usr/local/spark/conf/log4j.properties ~/workspace/WordCount/src/ (that file originally ships as a template).

liangguohun commented 7 years ago

Export the jar to bin, clean out the generated directories, and run:

spark-submit --driver-memory 2g --master local[*] --class RunWordCount bin/WordCount.jar
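
For completeness, a minimal sketch of what the submitted RunWordCount class could look like (the book's actual source is not reproduced in this thread, and the input/output paths below are made-up examples). Since --driver-memory 2g and --master local[*] are supplied on the command line, neither the spark.testing.memory workaround nor setMaster is needed here:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RunWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("word count")
    val sc = new SparkContext(conf)

    // Classic RDD word count (Spark 1.x API).
    val counts = sc.textFile("data/input.txt")   // hypothetical input path
      .flatMap(_.split("\\s+"))                  // split each line into words
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile("data/output")         // hypothetical output directory
    sc.stop()
  }
}
```

The object name has to match the --class argument exactly, and the compiled class has to end up inside bin/WordCount.jar; otherwise spark-submit cannot find the main class.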