twitter / hadoop-lzo

Refactored version of code.google.com/hadoop-gpl-compression for hadoop 0.20
GNU General Public License v3.0

sc.textFile doesn't seem to use LzoTextInputFormat when hadoop-lzo is installed #122

Closed renanvicente closed 7 years ago

renanvicente commented 7 years ago

When reading LZO files with sc.textFile, it misses a few files from time to time.

Sample:

val Data = sc.textFile(Files)
listFiles += Data.count()

Here Files is an HDFS directory containing LZO files. If the snippet is executed, say, 1000 times, it returns different counts on a few of those runs.

If you instead use newAPIHadoopFile to force com.hadoop.mapreduce.LzoTextInputFormat, it works perfectly and returns the same count on every execution.

Sample:

val Data = sc.newAPIHadoopFile(Files,
classOf[com.hadoop.mapreduce.LzoTextInputFormat],
classOf[org.apache.hadoop.io.LongWritable],
classOf[org.apache.hadoop.io.Text]).map(_._2.toString)
listFiles += Data.count()
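To make the comparison reproducible, the two read paths above can be combined into one self-contained sketch. This is an illustration only: it assumes a Spark cluster with hadoop-lzo on the classpath, and the application name and the HDFS path passed in args(0) are placeholders, not anything from the original report.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.hadoop.io.{LongWritable, Text}
import com.hadoop.mapreduce.LzoTextInputFormat

object LzoCountCheck {
  def main(args: Array[String]): Unit = {
    // Hypothetical setup; in spark-shell, `sc` already exists.
    val sc = new SparkContext(new SparkConf().setAppName("lzo-count-check"))
    val files = args(0) // HDFS directory of .lzo files (placeholder)

    // Default path: sc.textFile falls back to TextInputFormat; this is
    // where the intermittently different counts were observed.
    val defaultCount = sc.textFile(files).count()

    // Explicit path: force LzoTextInputFormat, which produced stable
    // counts in the report above.
    val lzoCount = sc.newAPIHadoopFile(
      files,
      classOf[LzoTextInputFormat],
      classOf[LongWritable],
      classOf[Text]
    ).map(_._2.toString).count()

    // If the two counts disagree across runs, the default read path is
    // dropping records.
    println(s"textFile: $defaultCount, LzoTextInputFormat: $lzoCount")
    sc.stop()
  }
}
```

Running this repeatedly against the same directory and diffing the two printed counts reproduces the discrepancy described above without needing a thousand manual runs.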

Looking at the Spark code, sc.textFile uses TextInputFormat by default and does not switch to com.hadoop.mapreduce.LzoTextInputFormat even when hadoop-lzo is installed:

https://github.com/apache/spark/blob/v2.0.1/core/src/main/scala/org/apache/spark/SparkContext.scala#L795-L801

sjlee commented 7 years ago

Shouldn't this be filed with the Spark project?

renanvicente commented 7 years ago

opened on Spark: https://issues.apache.org/jira/browse/SPARK-18414