ActianCorp / spark-vector

Repository for the Spark-Vector connector
Apache License 2.0

Exception when unloading const columns #49

Closed: and-costea closed this issue 8 years ago

and-costea commented 8 years ago

Stack trace:

java.lang.IllegalArgumentException
  at java.nio.Buffer.limit(Buffer.java:267)
  at com.actian.spark_vector.datastream.reader.DataStreamReader$$anonfun$readByteBuffer$1.apply(DataStreamReader.scala:92)
  at com.actian.spark_vector.datastream.reader.DataStreamReader$$anonfun$readByteBuffer$1.apply(DataStreamReader.scala:87)
  at com.actian.spark_vector.util.ResourceUtil$.closeResourceOnFailure(ResourceUtil.scala:39)
  at com.actian.spark_vector.datastream.reader.DataStreamReader$.readByteBuffer(DataStreamReader.scala:87)
  at com.actian.spark_vector.datastream.reader.DataStreamReader$.readByteBufferWithLength(DataStreamReader.scala:107)
  at com.actian.spark_vector.datastream.reader.DataStreamReader$.readWithByteBuffer(DataStreamReader.scala:112)
  at com.actian.spark_vector.datastream.reader.DataStreamTap.readVector(DataStreamTap.scala:38)
  at com.actian.spark_vector.datastream.reader.DataStreamTap.read(DataStreamTap.scala:53)
  at com.actian.spark_vector.datastream.reader.DataStreamTap.isEmpty(DataStreamTap.scala:59)
  at com.actian.spark_vector.datastream.reader.RowReader.hasNext(RowReader.scala:118)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
  at com.databricks.spark.csv.package$CsvSchemaRDD$$anonfun$10$$anon$1.hasNext(package.scala:160)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
  at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1108)
  at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
  at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1108)
  at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1206)
  at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1116)
  at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1095)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
  at org.apache.spark.scheduler.Task.run(Task.scala:88)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:744)
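For context, java.nio.Buffer.limit(int) throws IllegalArgumentException whenever the requested limit is negative or larger than the buffer's capacity. A minimal Scala sketch of the same failure mode (not the connector's code; the advertised length is a hypothetical value standing in for whatever the data stream reports for a const column):

import java.nio.ByteBuffer

object LimitRepro {
  def main(args: Array[String]): Unit = {
    val buffer = ByteBuffer.allocateDirect(1024) // capacity is fixed at allocation

    // A well-formed message fits inside the buffer's capacity.
    buffer.limit(512) // fine: 0 <= 512 <= 1024

    // If the stream advertises a length that does not match what the reader
    // allocated, limiting past capacity throws the exception seen above.
    val advertisedLength = 4096 // hypothetical length read from the stream
    buffer.limit(advertisedLength) // throws java.lang.IllegalArgumentException
  }
}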

and-costea commented 8 years ago

Made all columns non-const before they are sent out by Vector; with that change the exception no longer occurs. Closing this issue.
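A complementary, purely hypothetical guard on the reader side (not the actual DataStreamReader code) would be to validate the advertised length before setting the buffer limit, so a malformed stream fails with a descriptive error instead of a bare IllegalArgumentException:

import java.nio.ByteBuffer

// Hypothetical defensive variant of a buffer read: reject invalid lengths
// up front rather than letting Buffer.limit throw.
def readByteBufferChecked(buffer: ByteBuffer, advertisedLength: Int): ByteBuffer = {
  require(
    advertisedLength >= 0 && advertisedLength <= buffer.capacity,
    s"invalid vector length $advertisedLength for buffer of capacity ${buffer.capacity}"
  )
  buffer.clear()
  buffer.limit(advertisedLength)
  buffer
}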