spring-cloud / spring-cloud-stream-binder-kafka

Spring Cloud Stream binders for Apache Kafka and Kafka Streams
Apache License 2.0

NOT_ENOUGH_REPLICAS Errors After Upgrading The Spring Boot And Spring Cloud Version #1202

Closed omercelikceng closed 2 years ago

omercelikceng commented 2 years ago

Hello. After I upgraded the versions in my application, I started getting errors. I have 6 active brokers in my Kafka cluster. When I run my application with the old versions, I don't get any errors. However, when I run it with the new versions, I get the error below. Even though I have 6 active brokers, the error claims there aren't enough brokers to replicate.

I think it's a bug related to a version mismatch.

New project versions: spring-boot-starter-parent 2.4.13, spring-cloud-dependencies 2020.0.5

Old project versions: spring-boot-starter-parent 2.3.4.RELEASE, spring-cloud-dependencies Hoxton.SR5

Configuration :

spring:
  cloud:
    stream:
      kafka:
        bindings:
          person-topic-out:
            producer:
              configuration:
                acks: all
                retries: 2147483647
                enable:
                  idempotence: true
              topic:
                properties:
                  min.insync.replicas: 4
      binders:
        personKafka:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: kafkaAddress:19092
                      minPartitionCount: 8
                      autoCreateTopics: true
                      autoAddPartitions: true
                      replication-factor: 1
      bindings:
        person-topic-out:
          destination: person-topic
          contentType: application/json
          binder: personKafka
          producer:
            partition-count: 5

Error In Spring Boot App :

2022-02-24 21:57:09.677  WARN 12812 --- [ad | producer-1] o.a.k.clients.producer.internals.Sender  : [Producer clientId=producer-1] Got error produce response with correlation id 553 on topic-partition person-topic-3, retrying (2147483372 attempts left). Error: NOT_ENOUGH_REPLICAS
2022-02-24 21:57:09.712  WARN 12812 --- [ad | producer-1] o.a.k.clients.producer.internals.Sender  : [Producer clientId=producer-1] Got error produce response with correlation id 554 on topic-partition person-topic-6, retrying (2147483375 attempts left). Error: NOT_ENOUGH_REPLICAS
2022-02-24 21:57:09.781  WARN 12812 --- [ad | producer-1] o.a.k.clients.producer.internals.Sender  : [Producer clientId=producer-1] Got error produce response with correlation id 555 on topic-partition person-topic-3, retrying (2147483371 attempts left). Error: NOT_ENOUGH_REPLICAS
....
....
2022-02-24 21:57:09.956 ERROR 12812 --- [ad | producer-1] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='null' and payload='byte[46]' to topic person-topic:

org.apache.kafka.common.KafkaException: Producer is closed forcefully.
    at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:748) ~[kafka-clients-2.6.3.jar:na]
    at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:735) ~[kafka-clients-2.6.3.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:280) ~[kafka-clients-2.6.3.jar:na]
    at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]

2022-02-24 21:57:09.956 ERROR 12812 --- [ad | producer-1] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='null' and payload='byte[45]' to topic person-topic:

Error In Kafka Broker :

[2022-02-24 22:10:54,924] ERROR [ReplicaManager broker=6] Error processing append operation on partition person-topic-5 (kafka.server.ReplicaManager)
org.apache.kafka.common.errors.NotEnoughReplicasException: The size of the current ISR Set(6) is insufficient to satisfy the min.isr requirement of 3 for partition person-topic-5
garyrussell commented 2 years ago

See the documentation https://docs.spring.io/spring-cloud-stream-binder-kafka/docs/3.2.1/reference/html/spring-cloud-stream-binder-kafka.html#_kafka_binder_properties

spring.cloud.stream.kafka.binder.replicationFactor

The replication factor of auto-created topics if autoCreateTopics is active. Can be overridden on each binding.

If you are using Kafka broker versions prior to 2.4, then this value should be set to at least 1. Starting with version 3.0.8, the binder uses -1 as the default value, which indicates that the broker 'default.replication.factor' property will be used to determine the number of replicas. Check with your Kafka broker admins to see if there is a policy in place that requires a minimum replication factor; if that's the case then, typically, the default.replication.factor will match that value and -1 should be used, unless you need a replication factor greater than the minimum.

You are setting it to 1

replication-factor: 1

but your brokers require a higher number.
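To illustrate the suggestion above, a minimal sketch of the binder-level property (the surrounding keys follow the configuration already shown in this issue; the exact value is an assumption and must match your brokers' policy):

```yaml
spring:
  cloud:
    stream:
      kafka:
        binder:
          autoCreateTopics: true
          # -1 (the default since binder 3.0.8) defers to the broker's
          # default.replication.factor; otherwise choose a value that is
          # at least the brokers' required minimum replication factor.
          replication-factor: -1
```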

omercelikceng commented 2 years ago

Hi Gary. Thank you for your interest. I set the replication-factor to 3, but it still throws an error. What am I doing wrong?

omercelikceng commented 2 years ago

Sorry Gary. I finally found the problem. It happens when min.insync.replicas is set larger than the replication factor. Sorry for wasting your time; I had made a small change in the project and didn't notice it because I was confused. Should I delete the issue or close it?

sobychacko commented 2 years ago

@omercelikceng I will close the issue. We will keep it for any future reference or others running into the same issue.

omercelikceng commented 2 years ago

One more note: the old versions (spring-boot-starter-parent 2.3.4.RELEASE, spring-cloud-dependencies Hoxton.SR5) appear to work without errors, but only because the error shows up on the Kafka broker side and not in the Spring Boot application. I just wanted to share that information. Thank you for your interest. Thank you so much.