wuxianxingkong closed this issue 2 years ago
The README example you pointed to is the same as the one here: https://github.com/tensorflow/ecosystem/tree/master/spark/spark-tensorflow-connector#scala-api, so there is no difference between Spark-TFRecord and Spark-Tensorflow-Connector in this regard.
Your example uses org.tensorflow.hadoop.io, which is the library Spark-TFRecord is built upon. You can continue to use it if you find it more useful.
I don't know how to save from an RDD directly. Spark-TFRecord is built on DataFrames. Maybe you can convert the RDD to a DataFrame first.
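A minimal sketch of that workaround, assuming a local SparkSession; the column names and output path are made up for illustration:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("rdd-to-tfrecord").getOrCreate()
import spark.implicits._

// Build an RDD, convert it to a DataFrame, then write it with Spark-TFRecord.
val rdd = spark.sparkContext.parallelize(Seq((1L, "cat"), (2L, "dog")))
val df = rdd.toDF("id", "label")  // hypothetical columns
df.write.format("tfrecord").option("recordType", "Example").save("/tmp/tfrecord-out")
```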
Spark-Tensorflow-Connector is not built by us. Please contact its authors if you want a Scala 2.12 version.
@junshi15 Thanks for your reply. Using org.tensorflow:tensorflow-hadoop:1.15.0 solved my problem.
In the README, TFRecords can only be saved using a predefined schema, which is not convenient for dynamic columns (keys) in Features. With Spark-Tensorflow-Connector, the code can be very simple:
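(The original snippet was not preserved; below is a minimal sketch of the pattern being referred to, assuming an existing RDD[Example] and a hypothetical output path.)

```scala
import org.apache.hadoop.io.{BytesWritable, NullWritable}
import org.apache.spark.rdd.RDD
import org.tensorflow.example.Example
import org.tensorflow.hadoop.io.TFRecordFileOutputFormat

// Serialize each Example to bytes and write one TFRecord per element,
// no schema required; `outputPath` is made up for illustration.
def saveExamples(examples: RDD[Example], outputPath: String): Unit = {
  examples
    .map(ex => (new BytesWritable(ex.toByteArray), NullWritable.get))
    .saveAsNewAPIHadoopFile[TFRecordFileOutputFormat](outputPath)
}
```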
But when using spark-tfrecord, TFRecordFileOutputFormat is not included and the schema must be predefined as a StructType (a list of StructFields), which means I have to iterate over the RDD to extract all columns (keys) and rebuild the RDD with sorted columns (keys). How can I save TFRecords from an RDD[Example] directly?
Additionally, Spark-Tensorflow-Connector is built with Scala 2.11, which is incompatible with Spark 3.x (built with Scala 2.12), so I can't add it to my pom.xml when targeting Spark 3.x.