This library provides utilities to work with Protobuf objects in SparkSQL. It provides a way to read Parquet files written by SparkSQL back as an RDD of compatible protobuf objects, and it can also convert an RDD of protobuf objects into a DataFrame.
For sbt 0.13.6+:

    resolvers += Resolver.jcenterRepo

    libraryDependencies ++= Seq(
      "com.github.saurfang" %% "sparksql-protobuf" % "0.1.3",
      "org.apache.parquet" % "parquet-protobuf" % "1.8.3"
    )
SparkSQL is very powerful and easy to use. However, it has a few limitations, and the fact that the schema is only detected at runtime makes developers a lot less confident that they will get things right the first time. Static typing helps a lot! This is where protobuf comes in: it gives you typed, generated classes to work with instead of the generic `Row` in Spark/SparkSQL.

Read a Parquet file back as an `RDD[Protobuf]`:

    val personsPB = new ProtoParquetRDD(sc, "persons.parquet", classOf[Person])

where we need a `SparkContext`, the Parquet path, and the protobuf class.
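Because the result is an ordinary RDD of the generated protobuf class, the usual typed RDD operations apply. A minimal sketch, assuming a protobuf-generated `Person` class with a `getName` accessor (the import path for `ProtoParquetRDD` is also an assumption):

    import org.apache.spark.{SparkConf, SparkContext}
    import com.github.saurfang.parquet.proto.spark.ProtoParquetRDD // assumed package

    val sc = new SparkContext(new SparkConf().setAppName("proto-parquet-example"))

    // read the Parquet file back as typed protobuf objects
    val personsPB = new ProtoParquetRDD(sc, "persons.parquet", classOf[Person])

    // compile-time checked field access, instead of pulling values out of a Row by index
    val names = personsPB.map(_.getName).collect()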
This converts the existing workflow, in which Parquet files written by SparkSQL are read back as an RDD of `Row`s, into one where they are read back directly as an `RDD[Protobuf]`.

Infer a SparkSQL schema from a Protobuf message class:

    val personSchema = ProtoReflection.schemaFor[Person].dataType.asInstanceOf[StructType]
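The inferred schema is a plain `StructType`, so it can be inspected or passed to other SparkSQL APIs; a minimal sketch (the import path for `ProtoReflection` is an assumption):

    import org.apache.spark.sql.types.StructType
    import com.github.saurfang.parquet.proto.spark.sql.ProtoReflection // assumed package

    val personSchema = ProtoReflection.schemaFor[Person].dataType.asInstanceOf[StructType]

    // print the schema tree and list the inferred fields
    personSchema.printTreeString()
    personSchema.fields.foreach(f => println(s"${f.name}: ${f.dataType.simpleString}"))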
Convert an `RDD[Protobuf]` to a `DataFrame`:

    import com.github.saurfang.parquet.proto.spark.sql._
    val personsDF = sqlContext.createDataFrame(protoPersons)
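The resulting DataFrame behaves like any other, so it can be registered and queried with SQL. A minimal sketch, assuming the `Person` message has a `name` field:

    import com.github.saurfang.parquet.proto.spark.sql._

    // protoPersons: RDD[Person]
    val personsDF = sqlContext.createDataFrame(protoPersons)

    personsDF.registerTempTable("persons")
    sqlContext.sql("SELECT name FROM persons WHERE name IS NOT NULL").show()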
For more information, please see the test cases.
Under the hood, the library provides:

- `ProtoMessageConverter` has been improved to read from the `LIST` specification according to the latest parquet documentation. This implementation should be backwards compatible and is able to read repeated fields generated by writers like SparkSQL.
- `ProtoMessageParquetInputFormat` helps the above process by correctly returning the built protobuf object as the value.
- `ProtoParquetRDD` abstracts away the Hadoop input format and returns an RDD of your protobuf objects from Parquet files directly.
- `ProtoReflection` infers a SparkSQL schema from any Protobuf message class.
- `ProtoRDDConversions` converts Protobuf objects into SparkSQL rows.
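Put together, a round trip might look like the sketch below: protobuf objects are written out as Parquet through a DataFrame and later read back as typed objects without going through `Row` (import paths and the `Person` class are assumptions):

    import com.github.saurfang.parquet.proto.spark.ProtoParquetRDD // assumed package
    import com.github.saurfang.parquet.proto.spark.sql._

    // write: RDD[Person] -> DataFrame -> Parquet files
    val personsDF = sqlContext.createDataFrame(protoPersons)
    personsDF.write.parquet("persons.parquet")

    // read: Parquet files -> RDD[Person], no manual Row conversion needed
    val personsBack = new ProtoParquetRDD(sc, "persons.parquet", classOf[Person])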