Closed phelps-sg closed 3 years ago
Thanks for reporting, I will look into this.
It could be that the block size is larger than the configured maximum and you need to provide a larger max block size.
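For reference, a larger maximum block size can be passed to the input format through the Hadoop configuration when creating the RDD. The sketch below assumes the configuration key and class names as they appear in the hadoopcryptoledger project's example code and wiki (note the key prefix is spelled "hadoopcryptoledeger" in the library); verify both against the library version in use:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.BytesWritable
import org.apache.spark.{SparkConf, SparkContext}
import org.zuinnote.hadoop.bitcoin.format.common.BitcoinBlock
import org.zuinnote.hadoop.bitcoin.format.mapreduce.BitcoinBlockFileInputFormat

val sc = new SparkContext(new SparkConf().setAppName("BitcoinTransactionGraph"))

val hadoopConf = new Configuration()
// Raise the maximum block size the reader will buffer. Recent blocks can
// exceed an older default, which may surface as a BufferUnderflowException
// during parsing. Key name assumed from the project wiki.
hadoopConf.set("hadoopcryptoledeger.bitcoinblockinputformat.maxblocksize", "8000000")

// Read only the blk*.dat files; rev*.dat undo files are not block data.
val bitcoinBlocksRDD = sc.newAPIHadoopFile(
  "/user/bitcoin/input/blk*.dat",
  classOf[BitcoinBlockFileInputFormat],
  classOf[BytesWritable],
  classOf[BitcoinBlock],
  hadoopConf)
```

This is a sketch of the configuration mechanism only, not a confirmed fix for this issue.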
On 17.01.2020 at 11:23, Steve Phelps notifications@github.com wrote:
I've recently started to get the exceptions below when using the example SparkScalaBitcoinTransactionGraph. This was working previously, and I wonder whether it has to do with the fact that I am using a newer version of the bitcoin-core client (version 0.19.0.1)?
20/01/17 09:48:28 ERROR Executor: Exception in task 6.0 in stage 0.0 (TID 6)
java.nio.BufferUnderflowException
	at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)
	at org.zuinnote.hadoop.bitcoin.format.common.BitcoinBlockReader.parseTransactionInputs(BitcoinBlockReader.java:388)
	at org.zuinnote.hadoop.bitcoin.format.common.BitcoinBlockReader.parseTransactions(BitcoinBlockReader.java:318)
	at org.zuinnote.hadoop.bitcoin.format.common.BitcoinBlockReader.readBlock(BitcoinBlockReader.java:156)
	at org.zuinnote.hadoop.bitcoin.format.mapreduce.BitcoinBlockRecordReader.nextKeyValue(BitcoinBlockRecordReader.java:83)
	at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:230)
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
	at org.apache.spark.scheduler.Task.run(Task.scala:123)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Do you by chance have a block number in mind where this issue occurred?
Which version of the library (hadoopcryptoledger) are you using?
Just another question: did you only use the blk*.dat files? No rev*.dat or any other files?
Could you resolve the problem?
No further feedback.