OT-CONTAINER-KIT / redis-operator

A Golang-based Redis operator that creates and manages Redis standalone/cluster/replication/sentinel mode setups on top of Kubernetes.
https://ot-redis-operator.netlify.app/
Apache License 2.0

Follower pod failed after recreation #239

Closed dm3ch closed 2 years ago

dm3ch commented 2 years ago

What version of redis operator are you using? 0.9.0

These are not the full logs, but the earlier logs for this Redis cluster are very similar:

```
2022-02-11T21:35:09.912Z    INFO    controller_redis    Redis service get action is successful  {"Request.Service.Namespace": "fut-1", "Request.Service.Name": "redis-cluster-cinema-follower-headless"}
2022-02-11T21:35:09.914Z    INFO    controller_redis    Syncing Redis service with defined properties   {"Request.Service.Namespace": "fut-1", "Request.Service.Name": "redis-cluster-cinema-follower-headless"}
2022-02-11T21:35:10.001Z    INFO    controller_redis    Redis service updation is successful    {"Request.Service.Namespace": "fut-1", "Request.Service.Name": "redis-cluster-cinema-follower-headless"}
2022-02-11T21:35:10.005Z    INFO    controller_redis    Redis service get action is successful  {"Request.Service.Namespace": "fut-1", "Request.Service.Name": "redis-cluster-cinema-follower"}
2022-02-11T21:35:10.008Z    INFO    controller_redis    Syncing Redis service with defined properties   {"Request.Service.Namespace": "fut-1", "Request.Service.Name": "redis-cluster-cinema-follower"}
2022-02-11T21:35:10.015Z    INFO    controller_redis    Redis service updation is successful    {"Request.Service.Namespace": "fut-1", "Request.Service.Name": "redis-cluster-cinema-follower"}
2022-02-11T21:35:10.034Z    INFO    controller_redis    Redis statefulset get action was successful {"Request.StateFulSet.Namespace": "fut-1", "Request.StateFulSet.Name": "redis-cluster-cinema-leader"}
2022-02-11T21:35:10.057Z    INFO    controller_redis    Redis statefulset get action was successful {"Request.StateFulSet.Namespace": "fut-1", "Request.StateFulSet.Name": "redis-cluster-cinema-follower"}
2022-02-11T21:35:10.057Z    INFO    controllers.RedisCluster    Creating redis cluster by executing cluster creation commands   {"Request.Namespace": "fut-1", "Request.Name": "redis-cluster-cinema", "Leaders.Ready": "3", "Followers.Ready": "3"}
2022-02-11T21:35:10.070Z    INFO    controller_redis    Successfully got the ip for redis   {"Request.RedisManager.Namespace": "fut-1", "Request.RedisManager.Name": "redis-cluster-cinema-leader-0", "ip": "10.52.97.6"}
2022-02-11T21:35:10.097Z    INFO    controller_redis    Redis cluster nodes are listed  {"Request.RedisManager.Namespace": "fut-1", "Request.RedisManager.Name": "redis-cluster-cinema", "Output": "89dcebcad72dca38e085c0fc5e83f63122ab9b4a 10.52.148.213:6379@16379 slave,fail 15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 1644594785326 1644594783312 2 connected\nb1b08374f915b54a8eeffa544f0487d8b80f19ec 10.52.105.123:6379@16379 master - 0 1644615309000 3 connected 10923-16383\n50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 10.52.97.6:6379@16379 myself,master - 0 1644615308000 1 connected 0-5460\n303dbbc8390230a500489ecedc20b579907880ca 10.52.85.166:6379@16379 slave 50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 0 1644615309538 1 connected\n4d7ca36a0b4f060c31ebbe2fda1575f7b4a5b98b 10.52.23.245:6379@16379 slave b1b08374f915b54a8eeffa544f0487d8b80f19ec 0 1644615308025 3 connected\n15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 10.52.137.198:6379@16379 master - 0 1644615309740 2 connected 5461-10922\n"}
2022-02-11T21:35:10.097Z    INFO    controller_redis    Total number of redis nodes are {"Request.RedisManager.Namespace": "fut-1", "Request.RedisManager.Name": "redis-cluster-cinema", "Nodes": "6"}
2022-02-11T21:35:10.097Z    INFO    controllers.RedisCluster    Redis leader count is desired   {"Request.Namespace": "fut-1", "Request.Name": "redis-cluster-cinema"}
2022-02-11T21:35:10.105Z    INFO    controller_redis    Successfully got the ip for redis   {"Request.RedisManager.Namespace": "fut-1", "Request.RedisManager.Name": "redis-cluster-cinema-leader-0", "ip": "10.52.97.6"}
2022-02-11T21:35:10.108Z    INFO    controller_redis    Redis cluster nodes are listed  {"Request.RedisManager.Namespace": "fut-1", "Request.RedisManager.Name": "redis-cluster-cinema", "Output": "89dcebcad72dca38e085c0fc5e83f63122ab9b4a 10.52.148.213:6379@16379 slave,fail 15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 1644594785326 1644594783312 2 connected\nb1b08374f915b54a8eeffa544f0487d8b80f19ec 10.52.105.123:6379@16379 master - 0 1644615309000 3 connected 10923-16383\n50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 10.52.97.6:6379@16379 myself,master - 0 1644615308000 1 connected 0-5460\n303dbbc8390230a500489ecedc20b579907880ca 10.52.85.166:6379@16379 slave 50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 0 1644615309538 1 connected\n4d7ca36a0b4f060c31ebbe2fda1575f7b4a5b98b 10.52.23.245:6379@16379 slave b1b08374f915b54a8eeffa544f0487d8b80f19ec 0 1644615308025 3 connected\n15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 10.52.137.198:6379@16379 master - 0 1644615309740 2 connected 5461-10922\n"}
2022-02-11T21:35:10.108Z    INFO    controller_redis    Number of failed nodes in cluster   {"Request.RedisManager.Namespace": "fut-1", "Request.RedisManager.Name": "redis-cluster-cinema", "Failed Node Count": 1}
```

redis-operator version: 0.9.0

Does this issue reproduce with the latest release?

What operating system and processor architecture are you using (kubectl version)?

`kubectl version` output:
```
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:17:57Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.12-gke.1500", GitCommit:"d32c0db9a3ccd0ac73b0b3abd0532505217b376e", GitTreeState:"clean", BuildDate:"2021-11-17T09:30:02Z", GoVersion:"go1.15.15b5", Compiler:"gc", Platform:"linux/amd64"}
```

What did you do? Created a Redis cluster with 3 leaders and 3 followers (1 follower for each leader). After some time, one of the k8s nodes was shut down and one of the follower pods was recreated.

What did you expect to see? After deletion of the follower pod, it starts without problems. When the operator sees a failed follower, it tries to fix it or recreate it.

What did you see instead? The follower pod failed to rejoin the cluster; the operator just logged the failure and didn't try to fix the issue.

Additional troubleshooting details:

Kubectl pod status:

```
❯ kubectl -n fut-1 get pod -o wide | grep redis-cluster-cinema
redis-cluster-cinema-follower-0   2/2   Running   0   3d10h   10.52.85.166    gke-staging-fin-cluster-cell-a-6682abc2-jrkb
redis-cluster-cinema-follower-1   2/2   Running   0   5h9m    10.52.131.160   gke-staging-fin-cluster-main-c6747e06-wg58
redis-cluster-cinema-follower-2   2/2   Running   0   3d10h   10.52.23.245    gke-staging-fin-cluster-main-c6747e06-r6qv
redis-cluster-cinema-leader-0     2/2   Running   0   3d10h   10.52.97.6      gke-staging-fin-cluster-main-c6747e06-t26r
redis-cluster-cinema-leader-1     2/2   Running   0   3d10h   10.52.137.198   gke-staging-fin-cluster-main-c6747e06-pw7h
redis-cluster-cinema-leader-2     2/2   Running   0   3d10h   10.52.105.123   gke-staging-fin-cluster-main-c6747e06-np9t
```

`redis-cli cluster nodes` results from each pod:

```
❯ kubectl -n fut-1 exec -ti redis-cluster-cinema-leader-0 -- redis-cli cluster nodes
Defaulted container "redis-cluster-cinema-leader" out of: redis-cluster-cinema-leader, redis-exporter
89dcebcad72dca38e085c0fc5e83f63122ab9b4a 10.52.148.213:6379@16379 slave,fail 15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 1644594785326 1644594783312 2 connected
b1b08374f915b54a8eeffa544f0487d8b80f19ec 10.52.105.123:6379@16379 master - 0 1644615637180 3 connected 10923-16383
50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 10.52.97.6:6379@16379 myself,master - 0 1644615636000 1 connected 0-5460
303dbbc8390230a500489ecedc20b579907880ca 10.52.85.166:6379@16379 slave 50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 0 1644615637079 1 connected
4d7ca36a0b4f060c31ebbe2fda1575f7b4a5b98b 10.52.23.245:6379@16379 slave b1b08374f915b54a8eeffa544f0487d8b80f19ec 0 1644615636172 3 connected
15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 10.52.137.198:6379@16379 master - 0 1644615635669 2 connected 5461-10922

❯ kubectl -n fut-1 exec -ti redis-cluster-cinema-leader-1 -- redis-cli cluster nodes
Defaulted container "redis-cluster-cinema-leader" out of: redis-cluster-cinema-leader, redis-exporter
89dcebcad72dca38e085c0fc5e83f63122ab9b4a 10.52.148.213:6379@16379 slave,fail 15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 1644594786078 1644594783552 2 connected
303dbbc8390230a500489ecedc20b579907880ca 10.52.85.166:6379@16379 slave 50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 0 1644615640395 1 connected
15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 10.52.137.198:6379@16379 myself,master - 0 1644615640000 2 connected 5461-10922
50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 10.52.97.6:6379@16379 master - 0 1644615640595 1 connected 0-5460
b1b08374f915b54a8eeffa544f0487d8b80f19ec 10.52.105.123:6379@16379 master - 0 1644615639000 3 connected 10923-16383
4d7ca36a0b4f060c31ebbe2fda1575f7b4a5b98b 10.52.23.245:6379@16379 slave b1b08374f915b54a8eeffa544f0487d8b80f19ec 0 1644615639388 3 connected

❯ kubectl -n fut-1 exec -ti redis-cluster-cinema-leader-2 -- redis-cli cluster nodes
Defaulted container "redis-cluster-cinema-leader" out of: redis-cluster-cinema-leader, redis-exporter
15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 10.52.137.198:6379@16379 master - 0 1644615642000 2 connected 5461-10922
50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 10.52.97.6:6379@16379 master - 0 1644615642573 1 connected 0-5460
89dcebcad72dca38e085c0fc5e83f63122ab9b4a 10.52.148.213:6379@16379 slave,fail 15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 1644594785546 1644594783000 2 connected
b1b08374f915b54a8eeffa544f0487d8b80f19ec 10.52.105.123:6379@16379 myself,master - 0 1644615641000 3 connected 10923-16383
4d7ca36a0b4f060c31ebbe2fda1575f7b4a5b98b 10.52.23.245:6379@16379 slave b1b08374f915b54a8eeffa544f0487d8b80f19ec 0 1644615643580 3 connected
303dbbc8390230a500489ecedc20b579907880ca 10.52.85.166:6379@16379 slave 50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 0 1644615642573 1 connected

❯ kubectl -n fut-1 exec -ti redis-cluster-cinema-follower-0 -- redis-cli cluster nodes
Defaulted container "redis-cluster-cinema-follower" out of: redis-cluster-cinema-follower, redis-exporter
b1b08374f915b54a8eeffa544f0487d8b80f19ec 10.52.105.123:6379@16379 master - 0 1644615655571 3 connected 10923-16383
89dcebcad72dca38e085c0fc5e83f63122ab9b4a 10.52.148.213:6379@16379 slave,fail 15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 1644594785762 1644594783241 2 connected
50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 10.52.97.6:6379@16379 master - 0 1644615656074 1 connected 0-5460
15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 10.52.137.198:6379@16379 master - 0 1644615656577 2 connected 5461-10922
303dbbc8390230a500489ecedc20b579907880ca 10.52.85.166:6379@16379 myself,slave 50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 0 1644615655000 1 connected
4d7ca36a0b4f060c31ebbe2fda1575f7b4a5b98b 10.52.23.245:6379@16379 slave b1b08374f915b54a8eeffa544f0487d8b80f19ec 0 1644615656000 3 connected

❯ kubectl -n fut-1 exec -ti redis-cluster-cinema-follower-1 -- redis-cli cluster nodes
Defaulted container "redis-cluster-cinema-follower" out of: redis-cluster-cinema-follower, redis-exporter
f714b13aea607c08d2dc06999dd82d8f3530faae 10.52.131.160:6379@16379 myself,master - 0 0 0 connected

❯ kubectl -n fut-1 exec -ti redis-cluster-cinema-follower-2 -- redis-cli cluster nodes
Defaulted container "redis-cluster-cinema-follower" out of: redis-cluster-cinema-follower, redis-exporter
303dbbc8390230a500489ecedc20b579907880ca 10.52.85.166:6379@16379 slave 50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 0 1644615662000 1 connected
50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 10.52.97.6:6379@16379 master - 0 1644615661093 1 connected 0-5460
4d7ca36a0b4f060c31ebbe2fda1575f7b4a5b98b 10.52.23.245:6379@16379 myself,slave b1b08374f915b54a8eeffa544f0487d8b80f19ec 0 1644615661000 3 connected
15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 10.52.137.198:6379@16379 master - 0 1644615663005 2 connected 5461-10922
89dcebcad72dca38e085c0fc5e83f63122ab9b4a 10.52.148.213:6379@16379 slave,fail 15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa 1644594785986 1644594783960 2 connected
b1b08374f915b54a8eeffa544f0487d8b80f19ec 10.52.105.123:6379@16379 master - 0 1644615661999 3 connected 10923-16383
```

Logs of problem pod:

```
❯ kubectl -n fut-1 logs redis-cluster-cinema-follower-1 -c redis-cluster-cinema-follower
Redis is running without password which is not recommended
sed: /data/nodes.conf: No such file or directory
Running without persistence mode
Starting redis service in cluster mode.....
11:C 11 Feb 2022 15:53:37.859 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
11:C 11 Feb 2022 15:53:37.859 # Redis version=6.2.5, bits=64, commit=00000000, modified=0, pid=11, just started
11:C 11 Feb 2022 15:53:37.859 # Configuration loaded
11:M 11 Feb 2022 15:53:37.861 * monotonic clock: POSIX clock_gettime
11:M 11 Feb 2022 15:53:37.862 * No cluster configuration found, I'm f714b13aea607c08d2dc06999dd82d8f3530faae
11:M 11 Feb 2022 15:53:37.867 * Running mode=cluster, port=6379.
11:M 11 Feb 2022 15:53:37.867 # Server initialized
11:M 11 Feb 2022 15:53:37.868 * Ready to accept connections
```

iamabhishek-dubey commented 2 years ago

Can you please show me the manifest which you are applying?

dm3ch commented 2 years ago

Here's a dump of redis-cluster yaml from k8s:

```
❯ kubectl -n fut-1 get rediscluster redis-cluster-cinema -o yaml
apiVersion: redis.redis.opstreelabs.in/v1beta1
kind: RedisCluster
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"redis.redis.opstreelabs.in/v1beta1","kind":"RedisCluster","metadata":{"annotations":{"meta.helm.sh/release-name":"redis-cluster-cinema","meta.helm.sh/release-namespace":"fut-1"},"creationTimestamp":"2022-01-25T03:35:39Z","generation":1,"labels":{"app.kubernetes.io/component":"middleware","app.kubernetes.io/instance":"redis-cluster-cinema","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"redis-cluster-cinema","app.kubernetes.io/version":"0.8.0","helm.sh/chart":"redis-cluster-0.8.0"},"name":"redis-cluster-cinema","namespace":"fut-1","resourceVersion":"1337191080","uid":"199077d1-e436-409c-81ef-94e3ee38876d"},"spec":{"clusterSize":3,"kubernetesConfig":{"image":"quay.io/opstree/redis:v6.2.5","imagePullPolicy":"IfNotPresent","resources":{"limits":{"cpu":"1000m","memory":"1200Mi"},"requests":{"cpu":"100m","memory":"1024Mi"}},"serviceType":"ClusterIP"},"redisExporter":{"enabled":true,"image":"quay.io/opstree/redis-exporter:1.0","imagePullPolicy":"IfNotPresent","resources":{"limits":{"cpu":"100m","memory":"128Mi"},"requests":{"cpu":"100m","memory":"128Mi"}}},"redisFollower":{"serviceType":"ClusterIP"},"redisLeader":{"serviceType":"ClusterIP"}}}
    meta.helm.sh/release-name: redis-cluster-cinema
    meta.helm.sh/release-namespace: fut-1
  creationTimestamp: "2022-02-08T10:52:15Z"
  generation: 1
  labels:
    app.kubernetes.io/component: middleware
    app.kubernetes.io/instance: redis-cluster-cinema
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: redis-cluster-cinema
    app.kubernetes.io/version: 0.8.0
    helm.sh/chart: redis-cluster-0.8.0
  name: redis-cluster-cinema
  namespace: fut-1
  resourceVersion: "1390121553"
  uid: af45139d-7411-47fd-a00f-5b882587ff8e
spec:
  clusterSize: 3
  kubernetesConfig:
    image: quay.io/opstree/redis:v6.2.5
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 1000m
        memory: 1200Mi
      requests:
        cpu: 100m
        memory: 1024Mi
    serviceType: ClusterIP
  redisExporter:
    enabled: true
    image: quay.io/opstree/redis-exporter:1.0
    imagePullPolicy: IfNotPresent
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
  redisFollower:
    serviceType: ClusterIP
  redisLeader:
    serviceType: ClusterIP
```

nikitachrs commented 2 years ago

@iamabhishek-dubey ping (: I have the same problem.

iamabhishek-dubey commented 2 years ago

So the problem is that we need to persist the nodes.conf file, which is generated by Redis. So if we want to use the Redis cluster, we have to attach a minimal storage PVC to the StatefulSet, as shown in the example. Maybe I will create a story for validation that storageSpec should be defined.
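
For reference, a minimal sketch of such a storage block on the RedisCluster spec (field names follow the operator's example manifests; the 1Gi size is illustrative):

```yaml
spec:
  # Minimal persistent volume so /data/nodes.conf survives pod restarts;
  # sketch based on the operator's example manifests, not this issue's CR.
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```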

dm3ch commented 2 years ago

@iamabhishek-dubey We have tested Redis clusters both with and without a PVC, and both are affected by the described problem.

nodes.conf contains pod IP addresses, which change when a pod is recreated, so just persisting nodes.conf wouldn't work, if I understand correctly.
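
For illustration, nodes.conf stores the same records as `CLUSTER NODES` plus an epoch line, with peer IPs baked in at write time; a file on follower-0 would look roughly like this (reconstructed from the outputs above, not an actual dump):

```
303dbbc8390230a500489ecedc20b579907880ca 10.52.85.166:6379@16379 myself,slave 50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 0 1644615655000 1 connected
50eeb562ce0c51985ca2a6b1c28ac4edca8730ba 10.52.97.6:6379@16379 master - 0 1644615656074 1 connected 0-5460
vars currentEpoch 3 lastVoteEpoch 0
```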

I am not 100% sure, but I believe the right approach is for the operator to manage the node list: contact each node, check whether its node list contains all existing nodes (and add any that are missing), and forget nodes that no longer exist.
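
To make that concrete, here is a rough sketch of the manual steps such reconciliation would automate, using the node IDs and IPs from the outputs above (not the operator's actual code):

```
# 1. Make every surviving node forget the dead replica's old ID
#    (CLUSTER FORGET only updates the node it is sent to):
for pod in leader-0 leader-1 leader-2 follower-0 follower-2; do
  kubectl -n fut-1 exec redis-cluster-cinema-$pod -- \
    redis-cli CLUSTER FORGET 89dcebcad72dca38e085c0fc5e83f63122ab9b4a
done

# 2. Introduce the recreated pod (new IP 10.52.131.160) to the cluster:
kubectl -n fut-1 exec redis-cluster-cinema-leader-0 -- \
  redis-cli CLUSTER MEET 10.52.131.160 6379

# 3. Re-attach the empty node as a replica of the leader that lost it:
kubectl -n fut-1 exec redis-cluster-cinema-follower-1 -- \
  redis-cli CLUSTER REPLICATE 15d63cd4e8dbc4fbf290f09ee34aa78874dd55fa
```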

P.S. It could also be worked around by creating a non-headless Service for each pod, which would allow connecting to the pod via an IP that doesn't change after recreation; see the sketch below.
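
For illustration, a sketch of one such per-pod Service, selecting on the `statefulset.kubernetes.io/pod-name` label that Kubernetes sets on every StatefulSet pod (the Redis node would presumably also need `cluster-announce-ip` pointed at the Service IP; the operator does not create this today):

```yaml
# Hypothetical per-pod Service for the workaround above; names match
# the cluster in this issue, but nothing here is created by the operator.
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster-cinema-follower-1
  namespace: fut-1
spec:
  type: ClusterIP
  selector:
    statefulset.kubernetes.io/pod-name: redis-cluster-cinema-follower-1
  ports:
    - name: redis
      port: 6379
      targetPort: 6379
    - name: cluster-bus
      port: 16379
      targetPort: 16379
```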