I use spark.readStream.format("kafka") to read data from Kafka and decode the binary value column to a string.
I then use df.map(_.toSeq.foldLeft("")(_ + separator + _)).writeStream.format("kafka") to concatenate each row's fields and write the result back to Kafka.
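As a side note, the per-row concatenation in the map step can be isolated as a pure function. A minimal sketch (the function name and separator are my own, not from the original code) also shows that foldLeft with an empty-string seed prepends a stray leading separator, which is why mkString is the usual Scala idiom for this:

```scala
object RowEncoder {
  // Join a row's fields into one delimited string, as fed to the Kafka sink.
  def encodeRow(fields: Seq[String], separator: String): String =
    fields.mkString(separator)

  def main(args: Array[String]): Unit = {
    val fields = Seq("a", "b", "c")

    // mkString places the separator only between elements.
    println(encodeRow(fields, "|"))            // prints a|b|c

    // foldLeft with "" as the seed prepends a separator before the first field.
    println(fields.foldLeft("")(_ + "|" + _))  // prints |a|b|c
  }
}
```

Inside the streaming query this would be used as df.map(row => RowEncoder.encodeRow(row.toSeq.map(String.valueOf), separator)), assuming df is a Dataset[Row]; a leading separator in the output can make a downstream consumer that splits on it see an unexpected empty first field.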
If the write to Kafka fails once, then no matter how I change the Kafka topic afterwards, the streaming computation keeps failing with an ArrayIndexOutOfBoundsException: 1. If I output only to the console, there is no error.
If I run the same code snippet directly in spark-shell, without Livy, the behavior is the same as described in the previous point.