vinaynb opened this issue 6 years ago (status: Open)
Are you getting any delivery reports to your handler? And if so; is it with error set or not?
Yes, I am getting delivery reports and they say the messages were delivered successfully, with offset numbers.
What do you mean by "with error set or not"? How can I check whether the error is set?
FWIW, I am receiving those messages on the consumer end as well, so I assume they are being delivered successfully.
Could you try setting "acks" on the default.topic.config map instead, like so:
```go
kafka.NewProducer(&kafka.ConfigMap{
	"bootstrap.servers":    "localhost:9092",
	"default.topic.config": kafka.ConfigMap{"acks": "all"},
})
```
For completeness, I'm including the following doc snippet for min.insync.replicas:
When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.
@edenhill yes, I can confirm that it works if I set acks on default.topic.config as you said. The producer now fails as expected with the message Delivery failed: foo[0]@unset(Broker: Not enough in-sync replicas).
So I guess this is a bug?
@vinaynb although a tad confusing, this is actually the expected behavior. Please see the following snippet from librdkafka's CONFIGURATION.md:
This field indicates how many acknowledgements the leader broker must receive from ISR brokers before responding to the request: 0=Broker does not send any response/ack to client, 1=Only the leader broker will need to ack the message, -1 or all=broker will block until message is committed by all in sync replicas (ISRs) or broker's min.insync.replicas setting before sending response.
https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
@rnpridgeon sorry, but I didn't understand your comment. As per my question, I have min.insync.replicas=2 on my topic and acks=-1 in my producer; if the leader node fails, shouldn't it throw an exception straight away?
@vinaynb I apologize, I thought you were previously not setting acks; going back to the top, I see you actually have it set to -1. The fact that the Go client is silently ignoring this is actually a bug. In fact, the Go client shouldn't rely on a separate topic configuration at all, now that librdkafka handles them both in the same configuration object.
Sorry, I skimmed the initial request too quickly.
Has this bug been fixed? How can I set acks=1 on my Kafka producer so that it actually works? Thanks.
The fix is to apply all default.topic.config properties on the parent map and not meddle with the rdkafka default topic config at all.
@edenhill I had a problem like the one described here, where messages were lost during chaos-like testing (e.g. removing cluster nodes during load) due to the producer not respecting acks.
The failing producer config was:
```go
&kafka.ConfigMap{
	...
	"acks": "all",
	...
}
```
Only by updating it as below was the issue resolved:
```go
&kafka.ConfigMap{
	...
	"acks": "all",
	"default.topic.config": kafka.ConfigMap{"acks": "all"},
	...
}
```
This is on version v0.11.6. With v1.0+, default.topic.config has been deprecated. Have any changes/updates been made in v1.0 that should make the above work without explicitly setting the acks property on default.topic.config? I want to upgrade to a more recent version but am a bit worried that the problem will be re-introduced if I remove the current ack config on the topic level.
Yes, this is fixed in v1.0.0; you should now put all the topic configs on the main ConfigMap and not specify a default.topic.config (since it will overwrite the topic configs on the main map).
You can verify this by setting "message.timeout.ms" to 1234 and trying to produce to an unavailable broker (i.e., set bootstrap.servers to "blabla"); your messages should time out within a second or two instead of the default 5 minutes.
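For reference, a v1.0+ style producer configuration with everything on the parent map might look like the fragment below. This is a sketch based on the advice above, not code from the issue; the broker address is intentionally a placeholder to exercise the timeout check:

```go
// v1.0+ style: topic-level properties such as "acks" and
// "message.timeout.ms" go directly on the parent ConfigMap;
// no "default.topic.config" nesting.
p, err := kafka.NewProducer(&kafka.ConfigMap{
	"bootstrap.servers":  "blabla", // unreachable on purpose, to exercise the timeout
	"acks":               "all",
	"message.timeout.ms": 1234, // deliveries should fail within ~1-2s instead of 5 min
})
```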
OK, thanks for the quick response!
Description
The Kafka producer should throw an exception on the client when we have min.insync.replicas=2 and the partition leader node crashes unexpectedly in a 2-node Kafka cluster. Instead, it keeps sending messages to the cluster, and the consumer is able to pull them successfully as well.
How to reproduce
My setup:
- 1 ZooKeeper node
- 2 Kafka broker nodes (identical config except id, log path, and listeners property)
- 1 producer (doing async writes) and 1 subscriber, written in Go using this library
I am creating a topic using Kafka's command line tool as below
The issue is that whenever I kill the leader node of the partition, the producer and consumer still keep pushing and pulling messages from the Kafka cluster, even though the min.insync.replicas setting for my topic is 2. I expect the producer to throw exceptions, and the partition should not be allowed for writing, as per the docs.
I found one more thread similar to mine, in which it was suggested to set min.insync.replicas per topic. I have done that, but there are still no errors on the producer.
Am I doing something wrong somewhere?
Producer code
My broker config is as below
Note - this question was originally asked on Stack Overflow (link)
Checklist
Please provide the following information:
- Library version (LibraryVersion()): 0.11.4 & 0.11.5
- Client configuration: ConfigMap{...}
- Client logs (with "debug": ".." as necessary)