confluentinc / schema-registry

Confluent Schema Registry for Kafka
https://docs.confluent.io/current/schema-registry/docs/index.html

Schema registry unnecessary rebalancing when leader gets deleted. #2492

Open Shivam0609 opened 1 year ago

Shivam0609 commented 1 year ago

Problem Description :

When running multiple replicas of Schema Registry for HA, one pod is elected as leader while the others act as workers. When the leader pod is deleted, one of the remaining pods is elected as the new leader. But when the replacement pod comes up, it is elected as leader yet again, which causes a few seconds of downtime and failed schema creations during rebalancing. Rebalancing appears to happen twice.

Expected behaviour :

When running in HA, i.e., multiple replicas of Schema Registry, deleting the leader pod should trigger exactly one leader election. The cluster should then keep processing requests without switching leaders until that leader is deleted again.
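The expected behaviour amounts to a "sticky" election policy: only re-elect when the current leader actually leaves the group, never when a new member joins. A minimal sketch of that policy (not Schema Registry code; all names are illustrative):

```python
# Sticky election sketch: a leadership change happens only when the current
# leader is gone from the membership, not on every membership change.

def sticky_elect(members, current_leader):
    """Return the leader after a membership change.

    Keeps the current leader if it is still a member; otherwise picks the
    first remaining member deterministically.
    """
    if current_leader in members:
        return current_leader              # no leadership change, no downtime
    return sorted(members)[0] if members else None

# Leader sr-2 is deleted: one election promotes a surviving pod.
leader = sticky_elect(["sr-0", "sr-1"], "sr-2")

# The replacement pod sr-2b joins: with a sticky policy, the leader
# does NOT change again, avoiding the second downtime window.
leader = sticky_elect(["sr-0", "sr-1", "sr-2b"], leader)
```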

Steps to Reproduce :

  1. Run multiple replicas of Schema Registry using a Deployment or StatefulSet.
  2. Delete the leader pod and immediately create multiple schemas.
  3. Failures are observed for some of the schema creations.
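The double rebalance seen in the steps above is consistent with the Kafka group protocol, where every membership change (a member leaving or a member joining) triggers a rebalance. Deleting the leader pod produces two membership changes, because the Deployment recreates the pod as a brand-new member (new member id, new pod IP). A simplified simulation, assuming that model:

```python
# Simplified model (hypothetical, for illustration): each membership change
# triggers one rebalance, and each rebalance runs a fresh election.

def rebalance(members):
    """Illustrative election rule: pick the member with the lowest id."""
    return sorted(members)[0]

events = []
members = {"sr-0", "sr-1", "sr-2"}

# 1) Leader pod sr-2 is deleted -> the group shrinks -> first rebalance.
members.discard("sr-2")
events.append(("leave", rebalance(members)))

# 2) The Deployment recreates the pod, which joins as a new member
#    (new id, new IP) -> the group grows -> second rebalance.
members.add("sr-2b")
events.append(("join", rebalance(members)))

print(events)  # two rebalances for a single pod deletion
```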

Logs

kubectl get po -n kafka -o wide
NAME                        READY   STATUS     IP        
cp-schema-registry-0        1/1     Running    10.0.7.153
cp-schema-registry-1        1/1     Running    10.0.8.175
cp-schema-registry-2        1/1     Running    10.0.32.182 (Current Leader)

kubectl logs cp-schema-registry-0 -n kafka | grep leader
[2023-01-12 06:41:55,177] INFO Finished rebalance with leader election result: Assignment{version=1, error=0, leader='sr-1-6f6c7d7c-e739-400b-9904-620bc282350c', leaderIdentity=version=1,host=10.0.32.182,port=8081,scheme=http,leaderEligibility=true} (io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector)

Deleted the leader pod; a new pod scales up with IP 10.0.36.86

kubectl get po -n kafka -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP        
cp-schema-registry-0        1/1     Running   0          21h     10.0.7.153
cp-schema-registry-1        1/1     Running   0          6m39s   10.0.8.175
cp-schema-registry-2        1/1     Running   0          36s     10.0.36.86

Logs for leader election

[2023-01-12 06:40:55,135] INFO Rebalance started (io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector)
[2023-01-12 06:41:55,177] INFO Finished rebalance with leader election result: Assignment{version=1, error=0, leader='sr-1-6f6c7d7c-e739-400b-9904-620bc282350c', leaderIdentity=version=1,host=10.0.32.182,port=8081,scheme=http,leaderEligibility=true} (io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector)
.....
..
.
Caused by: io.confluent.kafka.schemaregistry.exceptions.UnknownLeaderException: Register schema request failed since leader is unknown
....
..
###### from below you can see an existing pod is elected as leader (10.0.7.153 - cp-schema-registry-0)
io.confluent.kafka.schemaregistry.rest.exceptions.RestRequestForwardingException: Error while forwarding register schema request to the leader
[2023-01-12 06:44:25,188] INFO Rebalance started (io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector)
[2023-01-12 06:44:25,216] INFO Finished rebalance with leader election result: Assignment{version=1, error=0, leader='sr-1-a5c21494-c85b-4296-9471-7e7c523c3178', leaderIdentity=version=1,host=10.0.7.153,port=8081,scheme=http,leaderEligibility=true} (io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector)
....
..
.
##### Once the new pod is up and running, rebalancing happened again and (10.0.36.86 - cp-schema-registry-2) is elected as leader
[2023-01-12 06:44:46,213] INFO Rebalance started (io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector)
[2023-01-12 06:44:46,218] INFO Finished rebalance with leader election result: Assignment{version=1, error=0, leader='sr-1-bf1e2a4d-a3f0-4817-a856-87cd1aaab60d', leaderIdentity=version=1,host=10.0.36.86,port=8081,scheme=http,leaderEligibility=true} (io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector)
...
..
.

Sometimes a leader is not even elected until the new pod is up and running.
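Until the double rebalance is addressed, clients can mask the transient "leader is unknown" window by retrying failed registrations. A minimal client-side sketch, where `register_schema` is a hypothetical callable standing in for the POST to `/subjects/<subject>/versions`, and the exception type is illustrative:

```python
import time

class LeaderUnknownError(Exception):
    """Stand-in for a 'leader is unknown' error response from the registry."""

def register_with_retry(register_schema, subject, schema,
                        attempts=5, backoff_s=0.5):
    """Retry registration with exponential backoff while the group rebalances."""
    for attempt in range(attempts):
        try:
            return register_schema(subject, schema)
        except LeaderUnknownError:
            if attempt == attempts - 1:
                raise                        # give up after the last attempt
            time.sleep(backoff_s * (2 ** attempt))
```

This is a mitigation only; the schemas that failed in step 3 above would still need their creation requests re-issued.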

Additional Information :

CP Schema Registry image used: 6.2.5

Deployment manifest

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cp-schema-registry
  namespace: kafka
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cp-schema-registry
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: cp-schema-registry
    spec:
      containers:
      - env:
        - name: SCHEMA_REGISTRY_HOST_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
          value: SSL://<host>:9094,PLAINTEXT://<host>:9092
        - name: SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL
          value: <host>:2181,<host>:2181
        - name: SCHEMA_REGISTRY_LISTENERS
          value: http://0.0.0.0:8081
        - name: SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL
          value: SSL
        - name: SCHEMA_REGISTRY_HEAP_OPTS
          value: -Xms512M -Xmx512M
        image: docker.io/confluentinc/cp-schema-registry:6.2.5
        imagePullPolicy: IfNotPresent
        name: cp-schema-registry
        ports:
        - containerPort: 8081
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 1Gi
      restartPolicy: Always
      terminationGracePeriodSeconds: 10
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            app: cp-schema-registry
        maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
      - labelSelector:
          matchLabels:
            app: cp-schema-registry
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway

Kubernetes cluster : AWS EKS

Kubernetes version :

root:~ kubectl version --short

Client Version: v1.23.9
Server Version: v1.23.13-eks-fb459a0
Shivam0609 commented 1 year ago

Any update on this issue?