mnorkin closed this issue 5 years ago
This is intentional behaviour.
@oleksiyk please correct me if I'm wrong.
Previously, all metadata fetching happened in `client.init()`, which is okay: that happens during app initialization, and if no-kafka cannot connect it keeps retrying and the app simply does not start. That's okay.

Now the metadata fetch happens in `client.getTopicPartitions` (called from `producer._prepareProduceRequest`, which is called from `producer._send`), which is not okay: the app has already started, the `client.getTopicPartitions` promise is never resolved, and the request is simply stuck.
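One way to keep a produce call from hanging forever is to race the metadata fetch against a timeout, so the promise rejects instead of staying pending. This is only a sketch of the idea, not no-kafka's API; `withTimeout` and the simulated never-resolving fetch are hypothetical:

```javascript
// Reject if `promise` does not settle within `ms` milliseconds.
// The wrapped promise stands in for any call that may never resolve,
// e.g. a metadata request to a broker that stopped responding.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('Metadata request timed out')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Example: a fetch that never resolves, simulating a paused Kafka instance.
const neverResolves = new Promise(() => {});

withTimeout(neverResolves, 100)
  .catch((err) => console.log(err.message)); // logs "Metadata request timed out"
```

With something like this in the produce path, the caller gets a rejected promise it can handle instead of a request that is stuck indefinitely.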
The same scenario could have happened before if you published to a new topic while there was a network connectivity issue. Previously, publishing to a new topic raised an `UnknownTopicOrPartition` exception, and after some retries it would fail, as defined here: https://github.com/oleksiyk/kafka/pull/238/files#diff-69b3da3df139e0ac86797a46df0ef970L99
We have a test case where we establish a connection to Kafka, pause the Kafka instance so it does not respond to anything, and publish a message to it. Previously this failed with a rejected promise; now it does not reject.
You are right. I will review the library's `updateMetadata` behaviour later today; let's see if we can drop the retries there completely.
The current implementation of the metadata update request retries an infinite number of times, which leaves consumers stuck in a retry cycle. It needs to break out once all retries are exhausted.
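Breaking out after a fixed number of attempts could look roughly like the following. This is a minimal sketch, not the library's actual `updateMetadata` code; the helper name, retry count, and delay are all assumptions:

```javascript
// Retry `fn` up to `maxRetries` times with a fixed delay between
// attempts, then reject with the last error instead of looping forever.
async function retryWithLimit(fn, maxRetries, delayMs) {
  let lastError;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError; // all retries exhausted -> the caller's promise rejects
}

// Example: a metadata update that always fails, as with an unresponsive broker.
let attempts = 0;
retryWithLimit(() => {
  attempts += 1;
  return Promise.reject(new Error('LeaderNotAvailable'));
}, 3, 10).catch((err) => console.log(`failed after ${attempts} attempts: ${err.message}`));
```

Because the final rejection propagates to the caller, a stuck consumer would surface an error after a bounded delay instead of silently retrying forever.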