ZuInnoTe / hadoopcryptoledger

Hadoop Crypto Ledger - Analyzing CryptoLedgers, such as Bitcoin Blockchain, on Big Data platforms, such as Hadoop/Spark/Flink/Hive
Apache License 2.0
141 stars 51 forks

spark-submit NoSuchMethodError org.zuinnote.spark.bitcoin.example.SparkScalaBitcoinTransactionGraph$.extractTransactionData #56

Closed foonsun closed 6 years ago

foonsun commented 6 years ago

@jornfranke

I am trying to run spark-submit according to this wiki: https://github.com/ZuInnoTe/hadoopcryptoledger/wiki/Using-Spark-Scala-Graphx-to-analyze-the-Bitcoin-transaction-graph

spark-submit --class org.zuinnote.spark.bitcoin.example.SparkScalaBitcoinTransactionGraph --master local[8] ./target/scala-2.10/example-hcl-spark-scala-graphx-bitcointransaction.jar /user/bitcoin/input /user/bitcoin/output

It appears that the method extractTransactionData is not found. How can I solve it? Thanks very much.

18/06/07 09:54:24 ERROR executor.Executor: Exception in task 10.0 in stage 0.0 (TID 10)
java.lang.NoSuchMethodError: scala.runtime.IntRef.create(I)Lscala/runtime/IntRef;
    at org.zuinnote.spark.bitcoin.example.SparkScalaBitcoinTransactionGraph$.extractTransactionData(SparkScalaBitcoinTransactionGraph.scala:108)
    at org.zuinnote.spark.bitcoin.example.SparkScalaBitcoinTransactionGraph$$anonfun$1.apply(SparkScalaBitcoinTransactionGraph.scala:60)
    at org.zuinnote.spark.bitcoin.example.SparkScalaBitcoinTransactionGraph$$anonfun$1.apply(SparkScalaBitcoinTransactionGraph.scala:60)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:192)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:64)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:242)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
jornfranke commented 6 years ago

You are probably using Scala 2.11, and thus you need to compile your application for Scala 2.11.

jornfranke commented 6 years ago

The build file compiles for both Scala 2.10 and 2.11, so simply take the version from the target folder that fits your environment.
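As an illustration, a minimal sketch of selecting the jar that matches the cluster's Scala version (the jar name comes from the spark-submit command above; the version variable is something you would set to your cluster's Scala binary version):

```shell
# Pick the assembly built for the cluster's Scala binary version (set to match your cluster)
SCALA_BINARY_VERSION="2.11"
JAR="./target/scala-${SCALA_BINARY_VERSION}/example-hcl-spark-scala-graphx-bitcointransaction.jar"
echo "$JAR"
# Then submit that jar instead of the scala-2.10 one, e.g.:
# spark-submit --class org.zuinnote.spark.bitcoin.example.SparkScalaBitcoinTransactionGraph \
#   --master local[8] "$JAR" /user/bitcoin/input /user/bitcoin/output
```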

foonsun commented 6 years ago

@jornfranke Yes, I build with Scala 2.11.

jornfranke commented 6 years ago

Is your Spark cluster using Scala 2.10 or 2.11?

foonsun commented 6 years ago

My Spark cluster is using Scala 2.11. It is weird.

foonsun commented 6 years ago

I set Scala 2.11 in build.sbt:

scalaVersion := "2.11.1"

jornfranke commented 6 years ago

In the spark-submit command you specify the binary for Scala 2.10.

No need to set it to 2.11: if you execute sbt +clean +assembly, it creates binaries for both Scala 2.10 and 2.11.
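As a sketch, sbt cross-building of this kind only requires a crossScalaVersions setting; the exact patch versions below are illustrative assumptions, not necessarily what the project's actual build file uses:

```scala
// build.sbt (fragment): cross-compile for Scala 2.10 and 2.11.
// Patch versions are illustrative; match them to your Spark cluster.
scalaVersion := "2.11.1"
crossScalaVersions := Seq("2.10.6", "2.11.1")
// With the sbt-assembly plugin, "sbt +clean +assembly" then builds one
// assembly jar per Scala version, under target/scala-2.10/ and target/scala-2.11/.
```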

foonsun commented 6 years ago

@jornfranke I use Scala 2.11 in the spark-submit command. I tried setting the Scala version to 2.1.0, because if I don't set the version, it uses Scala 2.12 and many errors occur. So I set the version in build.sbt to 2.1.0.

I ran it again and it now seems right. I will try 2.11 later and tell you my results.

jornfranke commented 6 years ago

Scala 2.12 is not supported by Spark.

There is no need to set a specific version: the build file allows you to cross-compile, creating two different binaries for 2.10 and 2.11.

foonsun commented 6 years ago

Thanks very much. It is still running and looks good so far.