danielwegener / logback-kafka-appender

Logback appender for Apache Kafka
Apache License 2.0

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms. #46

Open magicdogs opened 7 years ago

magicdogs commented 7 years ago

Hi, when I configured the logback.xml file and changed the root level to "debug", the application logged a lot of "org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms." messages and my application was blocked. If I change the root level back to "info", it works fine. Why is this? Thanks a lot.

exception log information

(screenshot)

lib information

(screenshot)

shades198 commented 7 years ago

I am also having the same issue, except that even after switching the log level to info, messages still don't reach the Kafka broker.

wuming333666 commented 7 years ago

I am also having the same issue.

zjingchuan commented 7 years ago

I am also having the same issue...

iDube commented 6 years ago

I am also having the same issue...

xiaods commented 6 years ago

Change the hostname to 0.0.0.0.
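The one-liner above presumably refers to the Kafka broker's listener configuration: a metadata timeout often means clients cannot reach the address the broker advertises. A sketch of the relevant `server.properties` entries (the host name is a placeholder):

```properties
# bind on all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# the address clients will actually use to connect back to the broker
advertised.listeners=PLAINTEXT://broker.example.com:9092
```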

feilongyang commented 6 years ago

I am also having the same issue.

shikonglaike commented 6 years ago

When I make the following changes, I have the same issue. (screenshot) @danielwegener, could you tell me why?

danielwegener commented 6 years ago

Because kafka tries to recursively log to itself, which may lead it into a deadlock (which fortunately is eventually resolved by the metadata timeout, but still breaks your client). The ensureDeferredAppends queues all recursive log entries and delays the actual sending until a non-kafka message is attempted to be logged, which "frees" them. However, as soon as you put ALL loggers to debug, kafka internals also try to log debug information - and those are not all captured by startsWith(KAFKA_LOGGER_PREFIX) - and these debug logs are internal, so we cannot safely assume to catch all of them while still supporting multiple versions of the kafka-client library.
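The deferral mechanism described above can be illustrated with a toy model (class and field names here are illustrative, not the appender's actual API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model of the deferral logic described above; names are illustrative,
// not the appender's real API.
class DeferringAppender {
    static final String KAFKA_LOGGER_PREFIX = "org.apache.kafka.clients";
    final Deque<String> deferred = new ArrayDeque<>(); // queued kafka-client events
    final List<String> sent = new ArrayList<>();       // events actually "sent"

    void append(String loggerName, String message) {
        if (loggerName.startsWith(KAFKA_LOGGER_PREFIX)) {
            // logging from inside the kafka client would recurse, so queue it
            deferred.add(message);
        } else {
            // a non-kafka event "frees" the queued entries first
            while (!deferred.isEmpty()) sent.add(deferred.poll());
            sent.add(message);
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        DeferringAppender a = new DeferringAppender();
        a.append("org.apache.kafka.clients.NetworkClient", "kafka internal");
        a.append("com.example.App", "app message");
        System.out.println(a.sent); // [kafka internal, app message]
    }
}
```

When every logger runs at debug, kafka-client events keep piling into the queue, and the non-kafka event needed to free them may never win the race against the producer's metadata wait - hence the 60000 ms timeout.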

So the solution for now: do not enable global debug logging (rather enable it selectively per package). The only really safe solution would be to shade the kafka-client with its transitive dependencies and replace its usages of slf4j with an implementation that either never logs to kafka itself or tags all of its messages as messages that always get queued. But I am not really happy with that solution either (possible licensing issues, and an appender release for each kafka-client release).
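The per-package suggestion as a logback.xml sketch (the application package name is a placeholder; topic, encoder, and producerConfig entries are omitted, see the project README):

```xml
<configuration>
  <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <!-- topic, encoder and producerConfig entries omitted -->
  </appender>

  <!-- enable debug selectively, for your own code only -->
  <logger name="com.example.myapp" level="debug"/>

  <!-- keep the kafka client itself away from debug to avoid recursive logging -->
  <logger name="org.apache.kafka" level="info"/>

  <root level="info">
    <appender-ref ref="kafkaAppender"/>
  </root>
</configuration>
```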

shikonglaike commented 6 years ago

@danielwegener, I see, and I appreciate your reply.

YouXiang-Wang commented 6 years ago

@danielwegener So you need to update your configuration example from "debug" to "INFO".

zhaojingyang commented 6 years ago

I have the same problem. When I make the following change, it works: send a message after super.start() in the KafkaAppender.start() function. I don't know why.

```java
@Override
public void start() {
    // only error free appenders should be activated
    if (!checkPrerequisites()) return;

    if (partition != null && partition < 0) {
        partition = null;
    }

    lazyProducer = new LazyProducer();

    super.start();

    final byte[] payload = "sssd".getBytes();
    final byte[] key = "sdsss".getBytes();
    final Long timestamp = System.currentTimeMillis();
    final ProducerRecord<byte[], byte[]> record =
            new ProducerRecord<>(topic, partition, timestamp, key, payload);
    lazyProducer.get().send(record);
    lazyProducer.get().flush();
}
```

OneYearOldChen commented 6 years ago

@magicdogs How do you solve this problem?

magicdogs commented 6 years ago

@OneYearOldChen Update the logback.xml file and set the root level to info...

lichenglin commented 6 years ago

Add this to your logback-spring.xml: `<logger name="org.apache.kafka" level="info"/>`

danielwegener commented 6 years ago

@Birdflying1005 good point :)

danielwegener commented 6 years ago

Can you guys imagine some doc/faq-entry or something that would have helped you to not run into this issue? I'd be happy to add it to the documentation

madanctc commented 4 years ago

Can you add spring.kafka.producer.retries=5 and request.timeout.ms=600000?
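For a Spring Boot application, the settings above might look like this in application.properties (the pass-through key for request.timeout.ms is an assumption; arbitrary producer settings go under spring.kafka.producer.properties.*):

```properties
spring.kafka.producer.retries=5
spring.kafka.producer.properties.request.timeout.ms=600000
```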

omrryldrrm commented 2 years ago

Hi, change this parameter: maxBlockTime = 2000 ms.
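Assuming "maxBlockTime" refers to the Kafka producer's max.block.ms setting, it can be lowered through the appender's producerConfig so that send() gives up on missing metadata after 2 s instead of the default 60 s, at the cost of dropping log events sooner:

```xml
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
  <!-- how long send() may block waiting for metadata; default is 60000 ms -->
  <producerConfig>max.block.ms=2000</producerConfig>
  <!-- bootstrap.servers, topic, encoder etc. omitted -->
</appender>
```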