Can you share the properties you are using?
It says: Fetching from Kafka for partition 0 for fetchSize 1024
If your message size is more than this (1 KB), the consumer won't fetch any messages.
You can set consumer.min.fetchsizebytes to a higher value (the default is 1 KB), so that if back-pressure kicks in, the consumer still pulls at least at this minimum rate.
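For example, something like this (the values here are only illustrations; size the floor above your largest expected message):

// Sketch of the consumer properties with a raised back-pressure floor.
// consumer.min.fetchsizebytes = 524288 is an assumed example value, not a recommendation.
val kafkaProperties: Map[String, String] = Map(
  // ... your other zookeeper/kafka settings ...
  "consumer.backpressure.enabled" -> "true",
  "consumer.fetchsizebytes" -> "1048576",      // normal fetch size per pull (1 MB)
  "consumer.min.fetchsizebytes" -> "524288")   // floor used when back-pressure throttles the rate (default 1 KB)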
Did you try restarting the Spark Streaming job, and is it still the same issue?
Oh, I didn't set consumer.min.fetchsizebytes.
val kafkaProperties: Map[String, String] = Map(
  "zookeeper.hosts" -> zkhosts,
  "zookeeper.port" -> zkports,
  "zookeeper.broker.path" -> brokerPath,
  "kafka.topic" -> topic,
  "zookeeper.consumer.connection" -> "x.x.x.x:2181, x.x.x.x:2181, x.x.x.x:2181",
  "zookeeper.consumer.path" -> "/spark-streaming/affiliate_click_record",
  "kafka.consumer.id" -> "123",
  "consumer.forcefromstart" -> "false",
  "consumer.backpressure.enabled" -> "true",
  "consumer.fetchsizebytes" -> "1048576",
  "consumer.fillfreqms" -> "1000")
I noticed the earlier log looks like this:
16/10/18 17:18:17 INFO KafkaUtils: Fetching from Kafka for partition 0 for fetchSize 1048576 and bufferSize 1048576
I restarted Spark Streaming and it has been working fine until now, about one day.
I checked my messages and found that the Kafka message at offset 4951368 has serialized_value_size=1013, so I think its total size is more than 1 KB.
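Rough arithmetic backing that up (assuming the Kafka 0.8.x message-set framing of 12 bytes of log overhead plus a 14-byte message header, and assuming the message has no key):

// Approximate on-wire size of the message at offset 4951368 under Kafka 0.8.x framing.
val logOverhead   = 12    // offset (8 bytes) + message size (4 bytes)
val messageHeader = 14    // crc (4) + magic (1) + attributes (1) + key length (4) + value length (4)
val keySize       = 0     // assumed: no key; add the key length if one is set
val valueSize     = 1013  // serialized_value_size reported for offset 4951368

val totalBytes = logOverhead + messageHeader + keySize + valueSize
println("approximate message size: " + totalBytes + " bytes")  // 1039 > 1024, so a 1 KB fetch can never cover it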
Thanks for your quick reply~
It worked fine for a few days; then no offsets were committed in ZooKeeper.
The last successful commit log:
Then the log is always:
spark: 2.0.0 (Scala 2.10)
kafka-spark-consumer: 1.0.6
kafka: 0.8.2.2