Closed: saraAlizadeh closed this issue 6 years ago
Hi @saraAlizadeh
Have you run sbt package
to build the jar file?
You need to change to the directory where you cloned the project and execute it to create the jar file.
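For reference, a minimal sketch of those steps (the directory name and the Scala version in the target path are illustrative and may differ on your machine):

```shell
# Build StreamDM from the cloned repository (paths are illustrative).
cd streamDM         # the directory where you cloned the project
sbt package         # compiles the sources and creates the jar under target/
ls target/scala-*/  # the generated jar should appear here
```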
Another thing you should check after that is the existence of file covtypeNorm.arff
in the same directory for which you are executing script (i.e. spark.sh). Let me know if this works, or the error message if it did not work.
Best Regards, Heitor
@hmgomes
Hi,
I've already run sbt package and it finished successfully; I can also see a jar file in the target directory.
As for covtypeNorm.arff, yes, it is in the correct directory.
Hi @saraAlizadeh
I believe the issue is that you are using Spark 2.3, while StreamDM supports up to Spark 2.2. You can try downloading Spark 2.2 and updating the script spark.sh to use it instead of SPARK_HOME, in case you want to keep SPARK_HOME pointing to Spark 2.3; otherwise you can just update SPARK_HOME.
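For example, assuming spark.sh launches jobs via $SPARK_HOME/bin/spark-submit, the change could look roughly like this (the install path, variable name, and jar filename below are illustrative, not taken from the actual script):

```shell
# Illustrative fragment for scripts/spark.sh: point at a local Spark 2.2
# installation without touching the global SPARK_HOME.
SPARK22_HOME=/opt/spark-2.2.0-bin-hadoop2.7   # hypothetical install path
"$SPARK22_HOME/bin/spark-submit" \
    --class org.apache.spark.streamdm.streamDMJob \
    target/scala-*/streamdm*.jar "$@"
```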
Best Regards, Heitor
Hi @hmgomes, I appreciate your answer. I checked Spark 2.2 and 2.1 too; both times this error came up:
ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.ArrayIndexOutOfBoundsException: 4
  at org.apache.spark.streamdm.core.DenseInstance.add(DenseInstance.scala:84)
  at org.apache.spark.streamdm.core.DenseInstance.add(DenseInstance.scala:26)
  at org.apache.spark.streamdm.classifiers.SGDLearner$$anonfun$train$1$$anonfun$3.apply(SGDLearner.scala:91)
Do you think I need to check other versions of Spark, or is there some other issue?
Hi @saraAlizadeh
I see, but this is a different error than the first one, right? Please provide me the command line for this last error so I can investigate further.
Best Regards, Heitor
I just wanted to run Spark, using ./spark.sh in the scripts directory.
Hi @saraAlizadeh ,
Thanks, so it is the default configuration, i.e. executing spark.sh without any parameters.
Best Regards, Heitor
This is the error message after running:
./spark.sh "200 EvaluatePrequential -l (meta.Bagging -l trees.HoeffdingTree) -s (FileReader -f covtypeNorm.arff -k 5810 -d 10 -i 581012) -e (BasicClassificationEvaluator -c -m) -h" 1> result_cov.txt 2> log_cov.log
Exception in thread "main" java.lang.Exception: Problem creating instance of class: EvaluatePrequential
  at com.github.javacliparser.ClassOption.cliStringToObject(ClassOption.java:139)
  at org.apache.spark.streamdm.streamDMJob$.main(streamDMJob.scala:55)
  at org.apache.spark.streamdm.streamDMJob.main(streamDMJob.scala)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
  at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
  at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.IllegalArgumentException: Problems with option: evaluator
  at com.github.javacliparser.ClassOption.setValueViaCLIString(ClassOption.java:60)
  at com.github.javacliparser.AbstractOption.resetToDefault(AbstractOption.java:90)
  at com.github.javacliparser.AbstractClassOption.<init>(AbstractClassOption.java:84)
  at com.github.javacliparser.AbstractClassOption.<init>(AbstractClassOption.java:63)
  at com.github.javacliparser.ClassOption.<init>(ClassOption.java:34)
  at org.apache.spark.streamdm.tasks.EvaluatePrequential.<init>(EvaluatePrequential.scala:43)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at java.lang.Class.newInstance(Class.java:442)
  at com.github.javacliparser.ClassOption.cliStringToObject(ClassOption.java:137)
  ... 11 more
Caused by: java.lang.Exception: Class not found: BasicClassificationEvaluator
  at com.github.javacliparser.ClassOption.cliStringToObject(ClassOption.java:132)
  at com.github.javacliparser.ClassOption.setValueViaCLIString(ClassOption.java:57)
  ... 22 more
I don't know whether there is a mistake in the arguments I provided in the command, or whether a class is missing. Do I have to add something to the classpath? I will mention again that the project was built successfully.
Scala code runner version: 2.13.0-M1
Spark version: 1.6.3
Well, I got this working. I cloned the project on a new machine with CentOS 7, Spark 2.1.0, Scala 2.10, and sbt 0.13.15, built it again, and this time the command ran successfully with no error message.
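For anyone hitting the same problem, the working environment above can be reproduced roughly as follows (the Spark install path is illustrative; only the versions come from the comment above):

```shell
# Versions that worked per the comment above: CentOS 7, Spark 2.1.0,
# Scala 2.10, sbt 0.13.15. The SPARK_HOME path is a hypothetical example.
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7
git clone https://github.com/huawei-noah/streamDM.git
cd streamDM
sbt package     # build the jar
cd scripts
./spark.sh      # with no arguments, runs the default configuration
```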
thank you for your patience @hmgomes
Hi, I am a newbie to this library. I have installed Scala and my Java version is 8. After compiling the library, I enter the command below: ./spark.sh "200 EvaluatePrequential -l (meta.Bagging -l trees.HoeffdingTree) -s (FileReader -f covtypeNorm.arff -k 5810 -d 10 -i 581012) -e (BasicClassificationEvaluator -c -m) -h" 1> result_cov.txt 2> log_cov.log
and the result in the log file is:
Exception in thread "main" java.lang.AbstractMethodError
  at org.apache.spark.internal.Logging$class.initializeLogIfNecessary(Logging.scala:99)
  at org.apache.spark.streamdm.streams.FileReader.initializeLogIfNecessary(FileReader.scala:46)
  at org.apache.spark.internal.Logging$class.log(Logging.scala:46)
  at org.apache.spark.streamdm.streams.FileReader.log(FileReader.scala:46)
  at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
  at org.apache.spark.streamdm.streams.FileReader.logInfo(FileReader.scala:46)
  at org.apache.spark.streamdm.streams.FileReader$$anonfun$init$1.apply$mcVI$sp(FileReader.scala:93)
  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
  at org.apache.spark.streamdm.streams.FileReader.init(FileReader.scala:92)
  at org.apache.spark.streamdm.streams.FileReader.getExampleSpecification(FileReader.scala:106)
  at org.apache.spark.streamdm.tasks.EvaluatePrequential.run(EvaluatePrequential.scala:64)
  at org.apache.spark.streamdm.streamDMJob$.main(streamDMJob.scala:56)
  at org.apache.spark.streamdm.streamDMJob.main(streamDMJob.scala)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
  at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
  at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
  at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
  at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
  at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
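A java.lang.AbstractMethodError in Spark's Logging trait often indicates a binary mismatch between the Spark/Scala versions the jar was compiled against and the ones used at runtime. A quick way to compare the two sides is (standard commands; output varies by installation):

```shell
# Print the runtime versions, then compare them against the versions
# StreamDM was built with (see build.sbt and project/build.properties).
"$SPARK_HOME/bin/spark-submit" --version   # prints Spark and Scala versions
scala -version
java -version
sbt sbtVersion
```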
Infrastructure details