strimzi / strimzi-kafka-operator

Apache Kafka® running on Kubernetes
https://strimzi.io/
Apache License 2.0

[Bug] Rolling update never finishes when the pods are not ready #3710

Closed: scholzj closed this issue 4 years ago

scholzj commented 4 years ago

When the Kafka pods are not ready or are crash-looping for some reason, the KafkaRoller needs to restart them to fix the issue. But instead it keeps trying to connect to them endlessly and never restarts them.

Apart from not restarting the pods, it also takes a very long time to time out while reporting that it gave up after only a short time. After roughly 20 minutes you get the message

2020-09-25 20:08:53 INFO  KafkaRoller:270 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 0, giving up after 10 attempts/127750ms

where it claims the attempts took 127 seconds, while in reality it took about 20 minutes.
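For context, 127750 ms appears to be just the sum of the scheduled backoff delays visible in the log below, not the wall-clock time actually spent, which presumably also includes the admin client connection timeouts of each attempt. A quick check of that assumption:

# sum of the per-attempt retry delays printed in the log (ms)
echo $(( 250 + 500 + 1000 + 2000 + 4000 + 8000 + 16000 + 32000 + 64000 ))   # prints 127750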

The full log is below. This should be fixed in 0.20.0 - I think this is a regression.

2020-09-25 19:47:42 INFO  AbstractOperator:217 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Kafka my-cluster should be created or updated
2020-09-25 19:47:45 WARN  VersionUsageUtils:60 - The client is using resource type 'poddisruptionbudgets' with unstable version 'v1beta1'
2020-09-25 19:47:48 WARN  VersionUsageUtils:60 - The client is using resource type 'ingresses' with unstable version 'v1beta1'
2020-09-25 19:48:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 2 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 250ms
2020-09-25 19:48:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 1 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 250ms
2020-09-25 19:49:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 0 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 250ms
2020-09-25 19:49:40 INFO  ClusterOperator:125 - Triggering periodic reconciliation for namespace myproject...
2020-09-25 19:49:40 WARN  VersionUsageUtils:60 - The client is using resource type 'kafkamirrormakers' with unstable version 'v1beta1'
2020-09-25 19:49:40 WARN  VersionUsageUtils:60 - The client is using resource type 'kafkaconnects' with unstable version 'v1beta1'
2020-09-25 19:49:40 WARN  VersionUsageUtils:60 - The client is using resource type 'kafkamirrormaker2s' with unstable version 'v1alpha1'
2020-09-25 19:49:41 WARN  VersionUsageUtils:60 - The client is using resource type 'kafkabridges' with unstable version 'v1alpha1'
2020-09-25 19:49:41 WARN  VersionUsageUtils:60 - The client is using resource type 'kafkarebalances' with unstable version 'v1alpha1'
2020-09-25 19:49:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 2 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 500ms
2020-09-25 19:49:50 WARN  AbstractOperator:377 - Reconciliation #1(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.
2020-09-25 19:50:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 1 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 500ms
2020-09-25 19:50:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 0 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 500ms
2020-09-25 19:51:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 2 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 1000ms
2020-09-25 19:51:40 INFO  ClusterOperator:125 - Triggering periodic reconciliation for namespace myproject...
2020-09-25 19:51:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 1 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 1000ms
2020-09-25 19:51:50 WARN  AbstractOperator:377 - Reconciliation #2(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.
2020-09-25 19:52:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 0 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 1000ms
2020-09-25 19:52:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 2 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 2000ms
2020-09-25 19:53:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 1 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 2000ms
2020-09-25 19:53:40 INFO  ClusterOperator:125 - Triggering periodic reconciliation for namespace myproject...
2020-09-25 19:53:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 0 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 2000ms
2020-09-25 19:53:50 WARN  AbstractOperator:377 - Reconciliation #3(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.
2020-09-25 19:54:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 2 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 4000ms
2020-09-25 19:54:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 1 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 4000ms
2020-09-25 19:55:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 0 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 4000ms
2020-09-25 19:55:40 INFO  ClusterOperator:125 - Triggering periodic reconciliation for namespace myproject...
2020-09-25 19:55:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 2 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 8000ms
2020-09-25 19:55:50 WARN  AbstractOperator:377 - Reconciliation #4(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.
2020-09-25 19:56:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 1 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 8000ms
2020-09-25 19:56:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 0 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 8000ms
2020-09-25 19:57:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 2 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 16000ms
2020-09-25 19:57:40 INFO  ClusterOperator:125 - Triggering periodic reconciliation for namespace myproject...
2020-09-25 19:57:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 1 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 16000ms
2020-09-25 19:57:50 WARN  AbstractOperator:377 - Reconciliation #5(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.
2020-09-25 19:58:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 0 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 16000ms
2020-09-25 19:58:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 2 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 32000ms
2020-09-25 19:59:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 1 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 32000ms
2020-09-25 19:59:40 INFO  ClusterOperator:125 - Triggering periodic reconciliation for namespace myproject...
2020-09-25 19:59:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 0 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 32000ms
2020-09-25 19:59:50 WARN  AbstractOperator:377 - Reconciliation #6(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.
2020-09-25 20:00:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 2 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 64000ms
2020-09-25 20:00:49 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 1 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 64000ms
2020-09-25 20:01:19 INFO  KafkaRoller:277 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 0 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Error getting broker config, retrying after at least 64000ms
2020-09-25 20:01:39 INFO  ClusterOperator:125 - Triggering periodic reconciliation for namespace myproject...
2020-09-25 20:01:49 WARN  AbstractOperator:377 - Reconciliation #7(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.
2020-09-25 20:01:53 INFO  KafkaRoller:440 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Pod 2 needs to be restarted. Reason: [Pod has old generation]
2020-09-25 20:03:39 INFO  ClusterOperator:125 - Triggering periodic reconciliation for namespace myproject...
2020-09-25 20:03:49 WARN  AbstractOperator:377 - Reconciliation #8(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.
2020-09-25 20:03:53 INFO  KafkaRoller:270 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 2, giving up after 10 attempts/127750ms
io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: An error while trying to determine rollability
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.lambda$canRoll$12(KafkaRoller.java:544) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.await(KafkaRoller.java:606) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.canRoll(KafkaRoller.java:543) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.restartIfNecessary(KafkaRoller.java:352) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.lambda$schedule$5(KafkaRoller.java:255) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.util.concurrent.TimeoutException
    at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886) ~[?:?]
    at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021) ~[?:?]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.await(KafkaRoller.java:602) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    ... 9 more
2020-09-25 20:03:53 WARN  KafkaAvailability:68 - org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1601064233322, tries=1, nextAllowedTryMs=1601064233423) timed out at 1601064233323 after 1 attempt(s)
2020-09-25 20:03:53 WARN  KafkaAvailability:86 - Error determining whether it is safe to restart pod 2
org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1601064233322, tries=1, nextAllowedTryMs=1601064233423) timed out at 1601064233323 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
2020-09-25 20:04:23 INFO  KafkaRoller:440 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Pod 1 needs to be restarted. Reason: [Pod has old generation]
2020-09-25 20:05:39 INFO  ClusterOperator:125 - Triggering periodic reconciliation for namespace myproject...
2020-09-25 20:05:49 WARN  AbstractOperator:377 - Reconciliation #9(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.
2020-09-25 20:06:23 INFO  KafkaRoller:270 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 1, giving up after 10 attempts/127750ms
io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: An error while trying to determine rollability
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.lambda$canRoll$12(KafkaRoller.java:544) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.await(KafkaRoller.java:606) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.canRoll(KafkaRoller.java:543) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.restartIfNecessary(KafkaRoller.java:352) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.lambda$schedule$5(KafkaRoller.java:255) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.util.concurrent.TimeoutException
    at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886) ~[?:?]
    at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021) ~[?:?]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.await(KafkaRoller.java:602) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    ... 9 more
2020-09-25 20:06:23 WARN  KafkaAvailability:68 - org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1601064383446, tries=1, nextAllowedTryMs=1601064383547) timed out at 1601064383447 after 1 attempt(s)
2020-09-25 20:06:23 WARN  KafkaAvailability:86 - Error determining whether it is safe to restart pod 1
org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1601064383446, tries=1, nextAllowedTryMs=1601064383547) timed out at 1601064383447 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
2020-09-25 20:06:53 INFO  KafkaRoller:440 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Pod 0 needs to be restarted. Reason: [Pod has old generation]
2020-09-25 20:07:39 INFO  ClusterOperator:125 - Triggering periodic reconciliation for namespace myproject...
2020-09-25 20:07:49 WARN  AbstractOperator:377 - Reconciliation #10(timer) Kafka(myproject/my-cluster): Failed to acquire lock lock::myproject::Kafka::my-cluster within 10000ms.
2020-09-25 20:08:53 INFO  KafkaRoller:270 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Could not roll pod 0, giving up after 10 attempts/127750ms
io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: An error while trying to determine rollability
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.lambda$canRoll$12(KafkaRoller.java:544) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.await(KafkaRoller.java:606) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.canRoll(KafkaRoller.java:543) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.restartIfNecessary(KafkaRoller.java:352) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.lambda$schedule$5(KafkaRoller.java:255) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.util.concurrent.TimeoutException
    at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886) ~[?:?]
    at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021) ~[?:?]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.await(KafkaRoller.java:602) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    ... 9 more
2020-09-25 20:08:53 WARN  KafkaAvailability:68 - org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1601064533534, tries=1, nextAllowedTryMs=1601064533635) timed out at 1601064533535 after 1 attempt(s)
2020-09-25 20:08:53 WARN  KafkaAvailability:86 - Error determining whether it is safe to restart pod 0
org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1601064533534, tries=1, nextAllowedTryMs=1601064533635) timed out at 1601064533535 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
2020-09-25 20:08:53 ERROR AbstractOperator:238 - Reconciliation #0(watch) Kafka(myproject/my-cluster): createOrUpdate failed
io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: An error while trying to determine rollability
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.lambda$canRoll$12(KafkaRoller.java:544) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.await(KafkaRoller.java:606) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.canRoll(KafkaRoller.java:543) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.restartIfNecessary(KafkaRoller.java:352) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.lambda$schedule$5(KafkaRoller.java:255) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.util.concurrent.TimeoutException
    at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886) ~[?:?]
    at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021) ~[?:?]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.await(KafkaRoller.java:602) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    ... 9 more
2020-09-25 20:08:53 INFO  OperatorWatcher:40 - Reconciliation #11(watch) Kafka(myproject/my-cluster): Kafka my-cluster in namespace myproject was MODIFIED
2020-09-25 20:08:53 WARN  AbstractOperator:470 - Reconciliation #0(watch) Kafka(myproject/my-cluster): Failed to reconcile
io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: An error while trying to determine rollability
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.lambda$canRoll$12(KafkaRoller.java:544) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.await(KafkaRoller.java:606) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.canRoll(KafkaRoller.java:543) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.restartIfNecessary(KafkaRoller.java:352) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.lambda$schedule$5(KafkaRoller.java:255) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: java.util.concurrent.TimeoutException
    at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886) ~[?:?]
    at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021) ~[?:?]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.await(KafkaRoller.java:602) ~[io.strimzi.cluster-operator-0.20.0-SNAPSHOT.jar:0.20.0-SNAPSHOT]
    ... 9 more
pantaoran commented 3 years ago

Hello, I found this from Google when analyzing my problem.

I am caught in a loop that repeats like this:

2021-08-12 14:13:02 INFO  KafkaAvailability:121 - strimzi.cruisecontrol.partitionmetricsamples/0 will be underreplicated (|ISR|=1 and min.insync.replicas=1) if broker 1 is restarted.
2021-08-12 14:13:02 INFO  KafkaRoller:296 - Reconciliation #25887(timer) Kafka(kafka-devl/cluster-main): Could not roll pod 1 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$UnforceableProblem: Pod cluster-main-kafka-1 is currently not rollable, retrying after at least 32000ms
2021-08-12 14:13:04 INFO  KafkaRoller:504 - Reconciliation #25887(timer) Kafka(kafka-devl/cluster-main): Pod 2 needs to be restarted. Reason: [Pod has old generation]
2021-08-12 14:13:05 INFO  KafkaRoller:296 - Reconciliation #25887(timer) Kafka(kafka-devl/cluster-main): Could not roll pod 2 due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Pod cluster-main-kafka-2 is currently the controller and there are other pods still to roll, retrying after at least 32000ms

There is no progress. I'm running AMQ Streams 1.7 (equivalent to Strimzi 0.22). The mentioned CruiseControl topic has replication factor 2.

To be fair I will admit that I'm trying something "weird": I scaled down my test Kafka cluster from 4 to 3 brokers in the custom resource to see what would happen. Is this behavior expected? I thought maybe the cluster operator would refuse to scale down the cluster, but it did not. Would it be considered my responsibility to move all replicas away from the broker that will be removed before doing so?

scholzj commented 3 years ago

Before you scale down the Kafka cluster, you have to move the replicas off the last node, which is the one removed by the scale-down. Right now that has to be done manually.
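For anyone landing here later, a minimal sketch of that manual step using the stock Kafka tooling shipped in the broker images. The namespace, pod and topic names are taken from the log above; the broker being removed (ID 3, the highest one), the partition list and the file paths are illustrative assumptions - generate the real reassignment plan from your own topic layout:

# 1. Find the partitions that still have replicas on the broker to be removed
kubectl exec -n kafka-devl cluster-main-kafka-0 -- \
  /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe

# 2. Write a reassignment plan that moves those partitions onto the remaining brokers
cat > reassign.json <<'EOF'
{
  "version": 1,
  "partitions": [
    { "topic": "strimzi.cruisecontrol.partitionmetricsamples", "partition": 0, "replicas": [0, 1] }
  ]
}
EOF
kubectl cp reassign.json kafka-devl/cluster-main-kafka-0:/tmp/reassign.json

# 3. Execute and verify the reassignment, then scale the Kafka CR down
kubectl exec -n kafka-devl cluster-main-kafka-0 -- \
  /opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
  --reassignment-json-file /tmp/reassign.json --execute
kubectl exec -n kafka-devl cluster-main-kafka-0 -- \
  /opt/kafka/bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
  --reassignment-json-file /tmp/reassign.json --verify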

pantaoran commented 3 years ago

I see. Thanks for your reply! Would it be possible to instruct CruiseControl to perform such an "emptying" of a broker?

scholzj commented 3 years ago

I think that requires further development and integration.

ywehrli commented 3 years ago

We're having more or less the same issue. We upgraded from Kafka 2.7 to 2.8 in our Kafka CR. Now the operator is caught in a loop:

2021-10-25 12:45:53 ERROR AbstractOperator:274 - Reconciliation #34326(watch) Kafka(my-namespaces/my-cluster): createOrUpdate failed
io.strimzi.operator.cluster.operator.resource.KafkaRoller$ForceableProblem: Pod my-cluster-kafka-0 is currently the controller and there are other pods still to roll
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.restartIfNecessary(KafkaRoller.java:358) ~[io.strimzi.cluster-operator-0.24.0.jar:0.24.0]
    at io.strimzi.operator.cluster.operator.resource.KafkaRoller.lambda$schedule$6(KafkaRoller.java:277) ~[io.strimzi.cluster-operator-0.24.0.jar:0.24.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]

After deleting broker-0, the cluster performed a rolling update and everything was fine again. It is also interesting that we had a partition which was not in sync before doing this. Performing rolling updates on the cluster did not fix that partition issue. Now, after going to 2.8.0 AND deleting broker-0 to trigger a rolling restart, this problem is gone as well. I don't think this is wanted behaviour.
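For reference, the broker-0 deletion described above presumably amounts to deleting the pod and letting the StatefulSet and the operator recreate and roll it (names taken from the log above):

kubectl delete pod my-cluster-kafka-0 -n my-namespaces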