Which version of the consumer are you running? This happens because your Spark version (1.6.0) and the version in the pom don't match. You can git clone the code and update the consumer pom to match your version and try. With Spark 1.6 you may see a couple of compilation issues, but they are easy to solve.
Here are the steps you can try:

1. `git clone` the latest code.
2. Modify pom.xml to match your Kafka and Spark versions (including the Scala version), e.g. as sketched below.
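A sketch of what those version properties could look like (the property names here are hypothetical; check the consumer's actual pom.xml for the exact keys before editing):

```xml
<!-- Hypothetical property names; match them to the keys actually used
     in the consumer's pom.xml. Versions follow the ones in this thread. -->
<properties>
    <spark.version>1.6.0</spark.version>
    <kafka.version>0.10.0.0</kafka.version>
    <scala.version>2.10.5</scala.version>
    <scala.binary.version>2.10</scala.binary.version>
</properties>
```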
Another option is to use consumer version 1.0.9, which works with Spark 1.6:
```xml
<dependency>
    <groupId>dibbhatt</groupId>
    <artifactId>kafka-spark-consumer</artifactId>
    <version>1.0.9</version>
</dependency>
```
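For context, a minimal receiver-launch sketch roughly following the project's README (the package, class, and property names are taken from that README and may differ between consumer versions; in particular, 1.0.9 may use a non-generic `MessageAndMetadata`):

```java
import java.util.Properties;

import org.apache.spark.storage.StorageLevel;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import consumer.kafka.MessageAndMetadata;
import consumer.kafka.ReceiverLauncher;

public class ConsumerSketch {
  public static JavaDStream<MessageAndMetadata<byte[]>> start(JavaStreamingContext jsc) {
    // ZooKeeper and topic settings, as shown in the project README;
    // adjust host, port, and topic to your cluster.
    Properties props = new Properties();
    props.put("zookeeper.hosts", "zkhost");
    props.put("zookeeper.port", "2181");
    props.put("kafka.topic", "mytopic");
    props.put("kafka.consumer.id", "kafka-consumer");

    int numberOfReceivers = 3; // one receiver can handle multiple partitions
    // launch() returns a single DStream unioned across all receivers.
    return ReceiverLauncher.launch(
        jsc, props, numberOfReceivers, StorageLevel.MEMORY_ONLY());
  }
}
```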
Here is the v1.0.9 README: https://github.com/dibbhatt/kafka-spark-consumer/tree/117f98ccf02ad4f6e5a8b8918b5db097e7d3a3d4
Thank you for your quick response!
I used your first approach and modified the latest code to use my versions of Kafka, Spark, and Scala. It seems to work.
I will do some more extensive testing during this week. If I find anything else I'll let you know.
Cheers!
Perfect. Do let me know if you see any issues or need any help tuning the various knobs.
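For anyone reading along, a sketch of a few of those knobs, extending the `props` object from the launch sketch above (property names as listed in the project README; verify them against the README of the consumer version you actually use):

```java
// Optional tuning properties from the project README; names and defaults
// may differ between consumer versions, so verify before relying on them.
props.put("consumer.forcefromstart", "false");      // false: resume from committed offsets
props.put("consumer.fetchsizebytes", "1048576");    // fetch size per request, in bytes
props.put("consumer.fillfreqms", "250");            // fetch interval per receiver, in ms
props.put("consumer.backpressure.enabled", "true"); // rate control under load
```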
When the Spark job was submitted, the system loaded the default CDH jar (spark-assembly-1.6.0-cdh5.14.4-hadoop2.6.0-cdh5.14.4.jar). The Kafka version is not 0.10; it is 0.9.0.
Hi @LinMingQiang, in your application pom, which versions of the jars have you specified?
Spark 1.6.0, Kafka 0.10.0
What's the issue you see? Is the streaming job not running?
I'm trying to use this library with older versions of Spark (1.6.0-cdh5.11.1) and Kafka (0.10.2-kafka-2.2.0), but when I try to persist the offsets after the application logic has run, I get the error mentioned below.
It seems to me that there is a mismatch between Scala versions. It's not easy for me to switch to Scala 2.11, so I guess my question is: is there a way to make your library work with my versions?
Below is the observed exception and the important bits of my pom file: