Open petrjanda opened 9 years ago
If you are using Akka, you might want to look at Scott's akka-kafka https://github.com/sclasen/akka-kafka which integrates more seamlessly.
The `System.exit(1)` is meant to fail there: if you can't produce data, in this case the process should die. You can override it and create a different implementation, but that is the intended behavior for this project's example.
You should adjust the retry and back-off settings of the producer so that it backs off and waits when there are broker issues. That way, as long as at least one broker is up, you keep running. This is one of many possible implementations, and each has its own way of managing the issues that can come up, which are often related to the calling system.
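For the 0.8.x Scala producer, the settings that control this are `message.send.max.retries` and `retry.backoff.ms`. A sketch of a more patient configuration (the broker address and the specific values are placeholders to tune for your deployment):

```scala
import java.util.Properties

// Producer config for the old (0.8.x) Scala producer. With higher
// retry/back-off values, send() keeps retrying through a short broker
// outage instead of failing on the first hiccup.
val props = new Properties()
props.put("metadata.broker.list", "localhost:9092")  // placeholder broker address
props.put("request.required.acks", "-1")             // wait for all in-sync replicas
props.put("message.send.max.retries", "10")          // placeholder; default is 3
props.put("retry.backoff.ms", "1000")                // placeholder; default is 100
```

These `Properties` would then be passed into a `ProducerConfig` when constructing the producer.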
You can also call the API directly http://kafka.apache.org/082/javadoc/org/apache/kafka/clients/producer/Producer.html and create your own library for interfacing with Kafka.
"if you can't produce data in this case the process should die" - fair enough. It feels though you should be able to recover from temporary kafka outage instead of killing the app (say if I get FailedToSendMessageException
I will back off for a bit and try again) - in current implementation decision is already made though.
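That back-off-and-retry behavior could also live at the call site. A minimal sketch of such a wrapper (a hypothetical helper, not part of scala-kafka) that retries a side-effecting call like `producer.send` a bounded number of times before rethrowing:

```scala
import scala.annotation.tailrec
import scala.util.{Try, Success, Failure}

// Hypothetical helper: retry `op` up to `attempts` times, sleeping
// `backoffMs` between failures, and rethrow only once retries run out.
@tailrec
def sendWithRetry[A](attempts: Int, backoffMs: Long)(op: => A): A =
  Try(op) match {
    case Success(a) => a
    case Failure(_) if attempts > 1 =>
      Thread.sleep(backoffMs)
      sendWithRetry(attempts - 1, backoffMs)(op)
    case Failure(e) => throw e
  }
```

A fixed sleep is the simplest choice here; an exponential back-off would be a natural refinement for longer outages.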
Just to clarify, I didn't manage to bring Kafka down by indexing too much data - I can imagine it can handle a lot. I was just testing the case where Kafka is not running (I shut it down manually) and is started later, after the Akka Stream was already running.
Thanks a lot for the recommendation of akka-kafka - I didn't know about that one. Also, using the Producer directly makes sense, as you suggest.
Hi,
I am trying to build a POC of Akka Streams with Kafka, but I've struggled to get things working. The use case I am interested in is what happens if Kafka is temporarily down and comes back shortly afterwards (say, a few seconds of downtime).
Given the code here https://github.com/stealthly/scala-kafka/blob/master/src/main/scala/KafkaProducer.scala#L106-114:
I am not able to recover from the scenario above, and the Kafka client just quits my application, which seems a bit harsh. I eventually got around this by doing the nasty:
producer.producer.send(producer.kafkaMesssage(encoder.toBytes(i), null))
as most of the internals are public. It goes around the problematic
try/catch
block, but it's obviously just a temporary solution. Any idea about this? Would it make sense to propagate the exception up and let the application code handle it, depending on the use case?
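For illustration, one hypothetical shape such propagation could take (not the library's actual API): have the send path return a `scala.util.Try` instead of calling `System.exit(1)`, so the caller decides whether to retry, buffer, or fail fast.

```scala
import scala.util.Try

// Hypothetical alternative to exiting the process on a send failure:
// surface the outcome to the caller as a Try. `doSend` stands in for
// the real producer call.
def trySend(payload: Array[Byte])(doSend: Array[Byte] => Unit): Try[Unit] =
  Try(doSend(payload))

// The caller now owns the failure-handling policy.
val result = trySend("hello".getBytes("UTF-8")) { bytes =>
  // producer.send(bytes) would go here; this stub always succeeds
  ()
}
```

A failed `doSend` yields a `Failure` carrying the original exception (e.g. a FailedToSendMessageException), which the application can match on.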