bitnami / charts

Bitnami Helm Charts
https://bitnami.com

Sentinels are not working in failover case #3472

Closed mahanam closed 4 years ago

mahanam commented 4 years ago

NAME: redis-ha REVISION: 1 RELEASED: Wed Aug 19 19:04:31 2020 CHART: redis-10.7.16 USER-SUPPLIED VALUES: {}

I pointed my Go Redis client at my setup => [single master {redis+sentinel} and 3 slaves {redis+sentinel}]

  1. Initially everything works fine: the client connects to the correct Redis master.
  2. After some time I deleted the current master pod {redis+sentinel}.
  3. After the failover, the client was able to pick up the new master.
  4. After some more time I deleted the new master pod {redis+sentinel} as well.
  5. This time the client got stuck in a loop retrying the old master IP address, which is a stale entry.

  6. Here comes the odd part: the client connects to the correct Redis master again as soon as it is restarted.

  7. After step (6) I killed the running, working client just to check whether it would keep working the next time.

  8. When I started the client again it got stuck on the stale IP address, as in step (5). (A quick sanity check is sketched right after this list.)
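
As a sanity check (a minimal sketch, not part of the client; the address and password are the ones used by the client below, and -a is only needed because sentinel auth is enabled), the master that Sentinel currently advertises can be queried directly:

# Ask Sentinel which address it currently advertises for "mymaster"
redis-cli -h 10.233.13.207 -p 26379 -a iFFRAhsXq0 sentinel get-master-addr-by-name mymaster
# If this returns the old pod IP, the stale entry is on the Sentinel side;
# if it returns the new IP, the client is caching a stale address.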

Logs: 2020/08/19 19:13:07 Initializing the redis client... 2020/08/19 19:13:07 Getting the sentinel redis master... redis: 2020/08/19 19:13:07 sentinel.go:470: sentinel: discovered new sentinel="7a732b2572c10a61e22e6e79885d1286b943669d" for master="mymaster" redis: 2020/08/19 19:13:07 sentinel.go:470: sentinel: discovered new sentinel="852f11a9d3634574e8786838ce5fb26c43f6bec1" for master="mymaster" redis: 2020/08/19 19:13:07 sentinel.go:470: sentinel: discovered new sentinel="3301ae7a855f25d2a6b0085a879c3609be823e72" for master="mymaster" redis: 2020/08/19 19:13:07 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.7:6379" 2020/08/19 19:13:07 Successfully connected to the redis master server:[10.233.13.207:26379] 2020/08/19 19:13:07 Retrieved key 0:0 2020/08/19 19:13:08 Retrieved key 1:1 2020/08/19 19:13:09 Retrieved key 2:2 2020/08/19 19:13:10 Retrieved key 3:3 2020/08/19 19:13:11 Retrieved key 4:4 2020/08/19 19:13:12 Retrieved key 5:5 2020/08/19 19:13:13 Retrieved key 6:6 2020/08/19 19:13:14 Retrieved key 7:7 2020/08/19 19:13:15 Retrieved key 8:8 2020/08/19 19:13:16 Retrieved key 9:9 2020/08/19 19:13:17 Retrieved key 10:10 2020/08/19 19:13:18 Retrieved key 11:11 2020/08/19 19:13:19 Retrieved key 12:12 2020/08/19 19:13:20 Retrieved key 13:13 2020/08/19 19:13:21 Retrieved key 14:14 2020/08/19 19:13:22 Retrieved key 15:15 2020/08/19 19:13:25 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:27 dial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:27 Retrieved key 16: 2020/08/19 19:13:29 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:30 dial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:30 Retrieved key 17: 2020/08/19 19:13:33 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:35 dial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:35 Retrieved key 18: 2020/08/19 19:13:38 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:40 dial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:40 Retrieved key 19: 2020/08/19 19:13:43 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:44 dial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:44 Retrieved key 20: 2020/08/19 19:13:47 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:49 dial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:49 Retrieved key 21: 2020/08/19 19:13:51 Write Errordial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:52 dial tcp 10.233.77.7:6379: connect: invalid argument 2020/08/19 19:13:52 Retrieved key 22: redis: 2020/08/19 19:13:53 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.11:6379" 2020/08/19 19:13:53 Retrieved key 23:23 2020/08/19 19:13:54 Retrieved key 24:24 2020/08/19 19:13:55 Retrieved key 25:25 2020/08/19 19:13:56 Retrieved key 26:26 2020/08/19 19:13:57 Retrieved key 27:27 2020/08/19 19:13:58 Retrieved key 28:28 ... 2020/08/19 19:15:11 Retrieved key 100:100 2020/08/19 19:15:12 Retrieved key 101:101 redis: 2020/08/19 19:15:13 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.12:6379" 2020/08/19 19:15:13 Retrieved key 102:102 2020/08/19 19:15:14 Retrieved key 103:103 2020/08/19 19:15:15 Retrieved key 104:104 2020/08/19 19:15:16 Retrieved key 105:105 ... 
2020/08/19 19:16:47 Retrieved key 195:195 redis: 2020/08/19 19:16:48 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.11:6379" 2020/08/19 19:16:50 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:16:52 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:16:52 Retrieved key 196: 2020/08/19 19:16:54 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:16:56 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:16:56 Retrieved key 197: 2020/08/19 19:16:58 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:00 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:00 Retrieved key 198: 2020/08/19 19:17:02 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:03 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:03 Retrieved key 199:

2020/08/19 19:17:05 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:07 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:07 Retrieved key 200: 2020/08/19 19:17:09 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:10 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:10 Retrieved key 201: 2020/08/19 19:17:13 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:14 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:14 Retrieved key 202: 2020/08/19 19:17:17 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:18 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:18 Retrieved key 203: 2020/08/19 19:17:21 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:23 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:23 Retrieved key 204: 2020/08/19 19:17:26 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:27 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:27 Retrieved key 205: 2020/08/19 19:17:30 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:31 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:31 Retrieved key 206: 2020/08/19 19:17:34 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:35 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:35 Retrieved key 207: 2020/08/19 19:17:37 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:38 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:38 Retrieved key 208: 2020/08/19 19:17:41 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:43 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:43 Retrieved key 209: 2020/08/19 19:17:46 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:48 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:48 Retrieved key 210: 2020/08/19 19:17:50 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:51 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:51 Retrieved key 211: 2020/08/19 19:17:54 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:56 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:56 Retrieved key 212: 2020/08/19 19:17:58 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:59 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:17:59 Retrieved key 213: 2020/08/19 19:18:02 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:04 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:04 Retrieved key 214: 2020/08/19 19:18:06 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:08 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:08 Retrieved key 215: 2020/08/19 19:18:10 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:12 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:12 Retrieved key 216: 2020/08/19 19:18:14 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:17 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:17 Retrieved 
key 217: 2020/08/19 19:18:19 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:20 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:20 Retrieved key 218: 2020/08/19 19:18:22 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:24 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:18:24 Retrieved key 219:

=============================================================================== 2020/08/19 19:18:58 Initializing the redis client... 2020/08/19 19:18:58 Getting the sentinel redis master... redis: 2020/08/19 19:18:58 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.13:6379" 2020/08/19 19:18:58 Successfully connected to the redis master server:[10.233.13.207:26379] 2020/08/19 19:18:58 Retrieved key 0:0 2020/08/19 19:18:59 Retrieved key 1:1 2020/08/19 19:19:00 Retrieved key 2:2 2020/08/19 19:19:01 Retrieved key 3:3 2020/08/19 19:19:02 Retrieved key 4:4 2020/08/19 19:19:03 Retrieved key 5:5 2020/08/19 19:19:04 Retrieved key 6:6 2020/08/19 19:19:05 Retrieved key 7:7 2020/08/19 19:19:06 Retrieved key 8:8 2020/08/19 19:19:07 Retrieved key 9:9

============================================================================================= 2020/08/19 19:20:20 Initializing the redis client... 2020/08/19 19:20:20 Getting the sentinel redis master... redis: 2020/08/19 19:20:20 sentinel.go:470: sentinel: discovered new sentinel="852f11a9d3634574e8786838ce5fb26c43f6bec1" for master="mymaster" redis: 2020/08/19 19:20:20 sentinel.go:470: sentinel: discovered new sentinel="3301ae7a855f25d2a6b0085a879c3609be823e72" for master="mymaster" redis: 2020/08/19 19:20:20 sentinel.go:470: sentinel: discovered new sentinel="57bff9e9e2cca87c135ec4eefc146a08f4654b9d" for master="mymaster" redis: 2020/08/19 19:20:20 sentinel.go:438: sentinel: new master="mymaster" addr="10.233.77.11:6379" 2020/08/19 19:20:22 Connection failed with redis master serverdial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:20:22 Successfully connected to the redis master server:[10.233.13.207:26379] 2020/08/19 19:20:24 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:20:25 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:20:25 Retrieved key 0: 2020/08/19 19:20:27 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:20:29 dial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:20:29 Retrieved key 1: 2020/08/19 19:20:32 Write Errordial tcp 10.233.77.11:6379: connect: invalid argument 2020/08/19 19:20:34 dial tcp 10.233.77.11:6379: connect: invalid argument 2

Expected behavior: if you delete the current master pod, the client should be able to connect to the new Redis master once the failover window has passed.

Version of Helm and Kubernetes:

demo@demo1:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:20:25Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

demo@demo1:~$ helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}

Configuration of debug manifest files:

[debug] Created tunnel using local port: '35084'

[debug] SERVER: "127.0.0.1:35084"

Release "redis-ha" does not exist. Installing it now. [debug] CHART PATH: /home/demo/git/sdpon-manifest/templates/sdponcharts/charts/voltha/charts/redis-ha

NAME: redis-ha REVISION: 1 RELEASED: Wed Aug 19 19:04:31 2020 CHART: redis-10.7.16 USER-SUPPLIED VALUES: {}

COMPUTED VALUES: cluster: enabled: true slaveCount: 3 clusterDomain: cluster.local configmap: |-

Enable AOF https://redis.io/topics/persistence#append-only-file

appendonly yes appendfsync everysec

no-appendfsync-on-rewrite no

save 900 1

save 300 10

save 60 10000

Disable RDB persistence, AOF persistence already enabled.

save "" global: redis: {} image: pullPolicy: IfNotPresent registry: docker-registry.com:5000 repository: redis tag: 6.0.6 master: affinity: {} command: redis-server configmap: null customLivenessProbe: {} customReadinessProbe: {} extraFlags: [] livenessProbe: enabled: true failureThreshold: 5 initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 5 persistence: accessModes:

HOOKS: MANIFEST:


Source: redis/templates/networkpolicy.yaml

kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: redis-ha namespace: default labels: app: redis chart: redis-10.7.16 release: redis-ha heritage: Tiller spec: podSelector: matchLabels: app: redis release: redis-ha policyTypes:

RESOURCES: ==> v1/ConfigMap NAME AGE redis-ha 0s redis-ha-health 0s

==> v1/NetworkPolicy NAME AGE redis-ha 0s

==> v1/PersistentVolume NAME AGE redis-data-redis-ha-master-0 0s redis-data-redis-ha-slave-0 0s redis-data-redis-ha-slave-1 0s redis-data-redis-ha-slave-2 0s

==> v1/PersistentVolumeClaim NAME AGE redis-data-redis-ha-master-0 0s redis-data-redis-ha-slave-0 0s redis-data-redis-ha-slave-1 0s redis-data-redis-ha-slave-2 0s

==> v1/Pod(related) NAME AGE redis-ha-master-0 0s redis-ha-slave-0 0s

==> v1/Secret NAME AGE redis-ha 0s

==> v1/Service NAME AGE redis-ha 0s redis-ha-headless 0s

==> v1/StatefulSet NAME AGE redis-ha-master 0s redis-ha-slave 0s

NOTES: Please be patient while the chart is being deployed. Redis can be accessed via port 6379 on the following DNS name from within your cluster:

redis-ha.default.svc.cluster.local for read only operations

For read/write operations, first access the Redis Sentinel cluster, which is available in port 26379 using the same domain name above.

Note: Since NetworkPolicy is enabled, only pods with the label redis-ha-client="true" will be able to connect to Redis.
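
For example (an illustrative sketch only; "redis-client" is a hypothetical pod name), an existing client pod could be allowed through the NetworkPolicy with:

# Label a client pod so the chart's NetworkPolicy allows it to reach Redis
kubectl label pod redis-client --namespace default redis-ha-client=true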

javsalgar commented 4 years ago

Hi,

It seems that you are using the redis-ha chart. Note that this one is not maintained by Bitnami; we maintain the redis and redis-cluster charts. I advise you to contact the redis-ha maintainers about this issue.

mahanam commented 4 years ago

Hi,

It is the Redis chart from the Bitnami charts; I have just used "redis-ha" as the release name.

Kindly check the full configuration above, which shows how the Redis Sentinel-based HA is deployed.

Thanks, Mahadev

javsalgar commented 4 years ago

Hi,

Sorry I misread the yaml. Could you add proper code blocks so it's more readable?

Regarding the client, do you have the go code you are using for testing? I would like to reproduce the exact issue.

mahanam commented 4 years ago

Hi,

Please find the snippet of my current client.

if config.RedisSentinel {
    log.Print("Getting the sentinel redis master...")
    RClient = redis.NewFailoverClient(&redis.FailoverOptions{
        MasterName:       "mymaster",
        SentinelAddrs:    []string{"10.233.13.207:26379"},
        SentinelPassword: "iFFRAhsXq0",
        Password:         "iFFRAhsXq0",
        MaxRetries:       10,
        DB:               0,
    })
} else {
    log.Print("Getting the redis client...")
    RClient = redis.NewClient(&redis.Options{
        Addr:     config.RedisAddr,
        Password: config.RedisPassword,
        DB:       config.RedisDB,
    })
}

err := RClient.Ping(ctx).Err()
if err != nil {
    log.Print("Connection failed with redis master server", err)
}

log.Print("Successfully connected to the redis master server:", config.RedisSentinelAddrs)

Thanks, Mahadev

javsalgar commented 4 years ago

Hi,

I imagine this will require importing some libraries and executing it with a set of commands. Could you provide a link to a GitHub repo that I can clone, run make on, and use for testing? Sorry for the inconvenience, but it would be very helpful for the engineering team.

mahanam commented 4 years ago

Hi,

We are not using any git repo; it is the same code we run as our client. Could you help me understand which libraries are required, as you suggested?

Thanks, Mahadev

javsalgar commented 4 years ago

Hi,

I imagine that, in your code, the use of redis. and other identifiers comes from some imports in your code. Am I right? That's what I would like to know.

mahanam commented 4 years ago

Hi,

package main

import (
    "context"
    "time"
    "log"
    "strconv"
    "github.com/go-redis/redis/v8"
)

var (
    ctx     = context.Background()
    RClient *redis.Client
)

type Config struct {
    RedisSentinelAddrs []string
    RedisAddr          string
    RedisMasterName    string
    RedisPassword      string
    RedisDB            int
    RedisSentinel      bool
}

func main() {
    config := &Config{
        RedisSentinelAddrs: []string{"redis-ha:26379"},
        RedisAddr: "",
        RedisMasterName: "mymaster",
        RedisPassword: "OwIyBdHKx0",
        RedisDB: 0,
        RedisSentinel: true,
    }
    RClient = NewRedisClient(config) // assign the package-level client declared above

    var i int = 0
    for true {
        s1 := strconv.Itoa(i)
        err := RClient.Set(ctx, s1 , i , 0).Err()
        if err != nil {
            log.Print("Write Error", err)
        }
        val, err := RClient.Get(ctx, s1).Result()
        if err != nil {
            log.Print(err)
        }
        log.Print("Retrieved key ", s1, ":", val)
        time.Sleep(time.Second)
        i++
    }

}

func NewRedisClient(config *Config) *redis.Client {
    var client *redis.Client
    if config.RedisSentinel {
        client = redis.NewFailoverClient(&redis.FailoverOptions{
            MasterName:    "mymaster",
            SentinelAddrs: []string{"redis-ha:26379"},
            SentinelPassword: "OwIyBdHKx0",
            Password: "OwIyBdHKx0",
            MaxRetries:       10,
            DB: 0,
        })
    } else {
        client = redis.NewClient(&redis.Options{
            Addr:     config.RedisAddr,
            Password: config.RedisPassword,
            DB:       config.RedisDB,
        })
    }

    _, err := client.Ping(ctx).Result()
    if err != nil {
        log.Fatal("COnnection failed with redis master server", err)
    }

    log.Print("Successfully connected to the redis master server:", config.RedisSentinelAddrs)

    return client

}
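
To run this snippet locally it only needs the go-redis v8 module (a minimal sketch, assuming Go modules; the module name is arbitrary):

# Hypothetical build/run steps for the program above (saved as main.go)
go mod init redis-failover-test
go get github.com/go-redis/redis/v8
go run main.go
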
javsalgar commented 4 years ago

Hi,

Just a quick note that I was able to reproduce the failover issue. I will forward this to the engineering team so we can work on a fix. As soon as I have more news, we will update this ticket.

mahanam commented 4 years ago

Hi,

Thanks for the confirmation on this issue.

Thanks, Mahadev

mahanam commented 4 years ago

Can I hear some noise on this? :)

javsalgar commented 4 years ago

Hi,

We have planned to work on this during the following weeks; as soon as there is more news, we will update the ticket.

mahanam commented 4 years ago

Great! Thanks

rafariossaa commented 4 years ago

Hi @mahanam, I am working on this. Could you indicate on which Kubernetes cluster this happened to you?

mahanam commented 4 years ago

Hi @rafariossaa

It is basically a 3-node setup on which I am deploying my redis-ha release, but the same issue was reproduced on a single node as well.

Let me know if you need more details regarding the system. Thanks, Mahadev

mahanam commented 4 years ago

Hi,

I hope your queries were answered in the previous comment. Is there any chance of an update on the status? The issue is still in the on-hold state.

Thanks Mahadev

rafariossaa commented 4 years ago

Hi, we are currently working on this. The hold state is mainly to avoid this issue being marked as stale and auto-closed. There is no ETA, but I hope it will be solved soon.

mahanam commented 4 years ago

Hi,

I have tried the fix from the branch "rafariossaa:working_redis_fix" and it behaves the same as the previous issue.

Please find below the simple steps I followed, for your reference. Is this bug fix tested, or have I misconfigured something?

  1. I deployed a fresh redis-ha release: NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES redis-master-node-0 2/2 Running 0 2m30s 10.233.121.246 localhost redis-master-node-1 2/2 Running 0 116s 10.233.121.249 localhost redis-master-node-2 2/2 Running 0 104s 10.233.121.250 localhost

  2. Deleted the master pod (10.233.121.246).

  3. At this point another slave [10.233.121.250] became the new master, as expected.

  4. But the problem is that the deleted master pod came back up with a new IP [10.233.121.251], and when I look at the logs inside that redis container, it keeps trying to connect to the old master (see the diagnostic sketch after the logs below). 20:S 17 Sep 2020 07:10:36.234 Connecting to MASTER 10.233.121.246:6379 20:S 17 Sep 2020 07:10:36.234 MASTER <-> REPLICA sync started 20:S 17 Sep 2020 07:11:37.651 # Timeout connecting to the MASTER... 20:S 17 Sep 2020 07:11:37.651 Connecting to MASTER 10.233.121.246:6379 20:S 17 Sep 2020 07:11:37.651 MASTER <-> REPLICA sync started

=======================================================

""For more logs,""

sentinel logs of the new pod [10.233.121.251] admin@localhost:/mnt/onl/sdpon/templates/sdponcharts/charts/redis-ha$ docker logs 605cf63fab95 11:X 17 Sep 2020 07:03:29.380 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo 11:X 17 Sep 2020 07:03:29.380 # Redis version=6.0.8, bits=64, commit=00000000, modified=0, pid=11, just started 11:X 17 Sep 2020 07:03:29.380 # Configuration loaded 11:X 17 Sep 2020 07:03:29.382 * Running mode=sentinel, port=26379. 11:X 17 Sep 2020 07:03:29.382 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128. 11:X 17 Sep 2020 07:03:29.386 # Sentinel ID is 7df5adeca0eec70d960e5f930e2cb45c3d021529 11:X 17 Sep 2020 07:03:29.386 # +monitor master mymaster 10.233.121.246 6379 quorum 2 11:X 17 Sep 2020 07:04:29.407 # +sdown master mymaster 10.233.121.246 6379

redis logs of the new pod [10.233.121.251] 20:C 17 Sep 2020 07:03:29.073 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo 20:C 17 Sep 2020 07:03:29.073 # Redis version=6.0.8, bits=64, commit=00000000, modified=0, pid=20, just started 20:C 17 Sep 2020 07:03:29.073 # Configuration loaded 20:S 17 Sep 2020 07:03:29.075 Running mode=standalone, port=6379. 20:S 17 Sep 2020 07:03:29.076 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128. 20:S 17 Sep 2020 07:03:29.076 # Server initialized 20:S 17 Sep 2020 07:03:29.076 Reading RDB preamble from AOF file... 20:S 17 Sep 2020 07:03:29.077 Loading RDB produced by version 6.0.8 20:S 17 Sep 2020 07:03:29.077 RDB age 209 seconds 20:S 17 Sep 2020 07:03:29.077 RDB memory usage when created 94.77 Mb 20:S 17 Sep 2020 07:03:29.077 RDB has an AOF tail 20:S 17 Sep 2020 07:03:29.413 Reading the remaining AOF tail... 20:S 17 Sep 2020 07:03:29.413 DB loaded from append only file: 0.336 seconds 20:S 17 Sep 2020 07:03:29.413 Ready to accept connections 20:S 17 Sep 2020 07:03:29.413 Connecting to MASTER 10.233.121.246:6379 20:S 17 Sep 2020 07:03:29.413 MASTER <-> REPLICA sync started 20:S 17 Sep 2020 07:04:30.803 # Timeout connecting to the MASTER... 20:S 17 Sep 2020 07:04:30.803 Connecting to MASTER 10.233.121.246:6379 20:S 17 Sep 2020 07:04:30.803 MASTER <-> REPLICA sync started 20:S 17 Sep 2020 07:05:31.172 # Timeout connecting to the MASTER... 20:S 17 Sep 2020 07:05:31.172 Connecting to MASTER 10.233.121.246:6379 20:S 17 Sep 2020 07:05:31.172 MASTER <-> REPLICA sync started 20:S 17 Sep 2020 07:06:32.561 # Timeout connecting to the MASTER... 20:S 17 Sep 2020 07:06:32.561 Connecting to MASTER 10.233.121.246:6379 20:S 17 Sep 2020 07:06:32.561 MASTER <-> REPLICA sync started 20:S 17 Sep 2020 07:07:33.969 # Timeout connecting to the MASTER... 20:S 17 Sep 2020 07:07:33.969 Connecting to MASTER 10.233.121.246:6379 20:S 17 Sep 2020 07:07:33.969 MASTER <-> REPLICA sync started 20:S 17 Sep 2020 07:08:34.385 # Timeout connecting to the MASTER... 20:S 17 Sep 2020 07:08:34.385 Connecting to MASTER 10.233.121.246:6379 20:S 17 Sep 2020 07:08:34.385 MASTER <-> REPLICA sync started 20:S 17 Sep 2020 07:09:35.797 # Timeout connecting to the MASTER... 20:S 17 Sep 2020 07:09:35.797 Connecting to MASTER 10.233.121.246:6379 20:S 17 Sep 2020 07:09:35.797 * MASTER <-> REPLICA sync started
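
To narrow down where the stale address lives, the view Sentinel has of the master can be compared with the master the restarted replica is actually trying to sync from (a rough sketch; the pod names match the listing above, while the "sentinel" and "redis" container names are assumptions):

# What does Sentinel on a surviving pod currently advertise as the master?
kubectl exec redis-master-node-1 -c sentinel -- redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
# Which master is the restarted replica configured to replicate from?
kubectl exec redis-master-node-0 -c redis -- redis-cli -p 6379 info replication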

=================================================

Configuration made in values-production.yaml (only the highlighted changes are shown):

command: helm upgrade --install redis-master -f values-production.yaml . --debug

Sentinel: enabled: true usePassword: false

networkPolicy: enabled: false

usePassword: false

securityContext: runAsUser: 0

command: "redis-server"

disableCommands:

- FLUSHDB

- FLUSHALL

command: "redis-server" #for slave

metrics: enabled: false

configmap: |-

Enable AOF https://redis.io/topics/persistence#append-only-file

appendonly yes
appendfsync everysec

===========================================================

Complete configuration:

[debug] Created tunnel using local port: '43627'

[debug] SERVER: "127.0.0.1:43627"

Release "redis-master" does not exist. Installing it now. [debug] CHART PATH: /home/admin/redis-work/redis-ha

NAME: redis-master REVISION: 1 RELEASED: Thu Sep 17 07:37:59 2020 CHART: redis-11.0.0 USER-SUPPLIED VALUES: cluster: enabled: true slaveCount: 3 clusterDomain: cluster.local configmap: |-

Enable AOF https://redis.io/topics/persistence#append-only-file

appendonly yes appendfsync everysec

no-appendfsync-on-rewrite no

save 900 1

save 300 10

save 60 10000

Disable RDB persistence, AOF persistence already enabled.

save "" global: redis: {} image: pullPolicy: IfNotPresent registry: docker.io repository: bitnami/redis tag: 6.0.8-debian-10-r0 master: affinity: {} command: redis-server configmap: null customLivenessProbe: {} customReadinessProbe: {} disableCommands: null extraFlags: [] livenessProbe: enabled: true failureThreshold: 5 initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 5 persistence: accessModes:

COMPUTED VALUES: cluster: enabled: true slaveCount: 3 clusterDomain: cluster.local configmap: |-

Enable AOF https://redis.io/topics/persistence#append-only-file

appendonly yes appendfsync everysec

no-appendfsync-on-rewrite no

save 900 1

save 300 10

save 60 10000

Disable RDB persistence, AOF persistence already enabled.

save "" global: redis: {} image: pullPolicy: IfNotPresent registry: docker.io repository: bitnami/redis tag: 6.0.8-debian-10-r0 master: affinity: {} command: redis-server customLivenessProbe: {} customReadinessProbe: {} extraEnvVars: [] extraEnvVarsCM: [] extraEnvVarsSecret: [] extraFlags: [] livenessProbe: enabled: true failureThreshold: 5 initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 5 persistence: accessModes:

HOOKS: MANIFEST:


Source: redis/templates/configmap-scripts.yaml

apiVersion: v1 kind: ConfigMap metadata: name: redis-master-scripts namespace: default labels: app: redis chart: redis-11.0.0 heritage: Tiller release: redis-master data: start-node.sh: |

#!/bin/bash

is_boolean_yes() {
    local -r bool="${1:-}"
    # comparison is performed without regard to the case of alphabetic characters
    shopt -s nocasematch
    if [[ "$bool" = 1 || "$bool" =~ ^(yes|true)$ ]]; then
        true
    else
        false
    fi
}

export REDIS_REPLICATION_MODE="slave"
if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
  if [[ ${BASH_REMATCH[2]} == "0" ]]; then
    if [[ ! -f /data/redisboot.lock ]]; then
      export REDIS_REPLICATION_MODE="master"
    else
      if is_boolean_yes "$REDIS_TLS_ENABLED"; then
        sentinel_info_command="redis-cli -a $REDIS_PASSWORD -h redis-master-headless.default.svc.cluster.local -p 26379 --tls --cert ${REDIS_TLS_CERT_FILE} --key ${REDIS_TLS_KEY_FILE} --cacert ${REDIS_TLS_CA_FILE} info"
      else
        sentinel_info_command="redis-cli -h redis-master-headless.default.svc.cluster.local -p 26379 info"
      fi
      if [[ ! ($($sentinel_info_command)) ]]; then
         export REDIS_REPLICATION_MODE="master"
         rm /data/redisboot.lock
      fi
    fi
  fi
fi
useradd redis
chown -R redis /data

if [[ -n $REDIS_PASSWORD_FILE ]]; then
  password_aux=`cat ${REDIS_PASSWORD_FILE}`
  export REDIS_PASSWORD=$password_aux
fi

if [[ -n $REDIS_MASTER_PASSWORD_FILE ]]; then
  password_aux=`cat ${REDIS_MASTER_PASSWORD_FILE}`
  export REDIS_MASTER_PASSWORD=$password_aux
fi

if [[ "$REDIS_REPLICATION_MODE" == "master" ]]; then
  echo "I am master"
  if [[ ! -f /opt/bitnami/redis/etc/master.conf ]];then
    cp /opt/bitnami/redis/mounted-etc/master.conf /opt/bitnami/redis/etc/master.conf
  fi
else
  if [[ ! -f /opt/bitnami/redis/etc/replica.conf ]];then
    cp /opt/bitnami/redis/mounted-etc/replica.conf /opt/bitnami/redis/etc/replica.conf
  fi

  if is_boolean_yes "$REDIS_TLS_ENABLED"; then
    sentinel_info_command="timeout -s 5 10 redis-cli -h redis-master-headless.default.svc.cluster.local -p 26379 --tls --cert ${REDIS_TLS_CERT_FILE} --key ${REDIS_TLS_KEY_FILE} --cacert ${REDIS_TLS_CA_FILE} sentinel get-master-addr-by-name mymaster"
  else
    sentinel_info_command="timeout -s 5 10 redis-cli -h redis-master-headless.default.svc.cluster.local -p 26379 sentinel get-master-addr-by-name mymaster"
  fi
  REDIS_SENTINEL_INFO=($($sentinel_info_command))
  REDIS_MASTER_HOST=${REDIS_SENTINEL_INFO[0]}
  REDIS_MASTER_PORT_NUMBER=${REDIS_SENTINEL_INFO[1]}
fi

if [[ ! -f /opt/bitnami/redis/etc/redis.conf ]];then
  cp /opt/bitnami/redis/mounted-etc/redis.conf /opt/bitnami/redis/etc/redis.conf
fi
ARGS=("--port" "${REDIS_PORT}")

if [[ "$REDIS_REPLICATION_MODE" == "slave" ]]; then
  ARGS+=("--slaveof" "${REDIS_MASTER_HOST}" "${REDIS_MASTER_PORT_NUMBER}")
fi
ARGS+=("--protected-mode" "no")

if [[ "$REDIS_REPLICATION_MODE" == "master" ]]; then
  ARGS+=("--include" "/opt/bitnami/redis/etc/master.conf")
else
  ARGS+=("--include" "/opt/bitnami/redis/etc/replica.conf")
fi

ARGS+=("--include" "/opt/bitnami/redis/etc/redis.conf")

touch /data/redisboot.lock
redis-server "${ARGS[@]}"

start-sentinel.sh: |

#!/bin/bash

replace_in_file() {
    local filename="${1:?filename is required}"
    local match_regex="${2:?match regex is required}"
    local substitute_regex="${3:?substitute regex is required}"
    local posix_regex=${4:-true}

    local result

    # We should avoid using 'sed in-place' substitutions
    # 1) They are not compatible with files mounted from ConfigMap(s)
    # 2) We found incompatibility issues with Debian10 and "in-place" substitutions
    del=$'\001' # Use a non-printable character as a 'sed' delimiter to avoid issues
    if [[ $posix_regex = true ]]; then
        result="$(sed -E "s${del}${match_regex}${del}${substitute_regex}${del}g" "$filename")"
    else
        result="$(sed "s${del}${match_regex}${del}${substitute_regex}${del}g" "$filename")"
    fi
    echo "$result" > "$filename"
}
sentinel_conf_set() {
    local -r key="${1:?missing key}"
    local value="${2:-}"

    # Sanitize inputs
    value="${value//\\/\\\\}"
    value="${value//&/\\&}"
    value="${value//\?/\\?}"
    [[ "$value" = "" ]] && value="\"$value\""

    replace_in_file "/opt/bitnami/redis-sentinel/etc/sentinel.conf" "^#*\s*${key} .*" "${key} ${value}" false
}
is_boolean_yes() {
    local -r bool="${1:-}"
    # comparison is performed without regard to the case of alphabetic characters
    shopt -s nocasematch
    if [[ "$bool" = 1 || "$bool" =~ ^(yes|true)$ ]]; then
        true
    else
        false
    fi
}

if [[ -n $REDIS_PASSWORD_FILE ]]; then
  password_aux=`cat ${REDIS_PASSWORD_FILE}`
  export REDIS_PASSWORD=$password_aux
fi

if [[ ! -f /opt/bitnami/redis-sentinel/etc/sentinel.conf ]]; then
  cp /opt/bitnami/redis-sentinel/mounted-etc/sentinel.conf /opt/bitnami/redis-sentinel/etc/sentinel.conf
fi

export REDIS_REPLICATION_MODE="slave"
if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
  if [[ ${BASH_REMATCH[2]} == "0" ]]; then
    if [[ ! -f /data/sentinelboot.lock ]]; then
      export REDIS_REPLICATION_MODE="master"
    else
      if is_boolean_yes "$REDIS_TLS_ENABLED"; then
        sentinel_info_command="redis-cli -a $REDIS_PASSWORD -h redis-master-headless.default.svc.cluster.local -p 26379 --tls --cert ${REDIS_TLS_CERT_FILE} --key ${REDIS_TLS_KEY_FILE} --cacert ${REDIS_TLS_CA_FILE} info"
      else
        sentinel_info_command="redis-cli -h redis-master-headless.default.svc.cluster.local -p 26379 info"
      fi
      if [[ ! ($($sentinel_info_command)) ]]; then
         export REDIS_REPLICATION_MODE="master"
         rm /data/sentinelboot.lock
      fi
    fi
  fi
fi

if [[ "$REDIS_REPLICATION_MODE" == "master" ]]; then
  sentinel_conf_set "sentinel monitor" "mymaster redis-master-node-0.redis-master-headless.default.svc.cluster.local 6379 2"
else
  if is_boolean_yes "$REDIS_TLS_ENABLED"; then
    sentinel_info_command="redis-cli -a $REDIS_PASSWORD -h redis-master-headless.default.svc.cluster.local -p 26379 --tls --cert ${REDIS_TLS_CERT_FILE} --key ${REDIS_TLS_KEY_FILE} --cacert ${REDIS_TLS_CA_FILE} sentinel get-master-addr-by-name mymaster"
  else
    sentinel_info_command="redis-cli -h redis-master-headless.default.svc.cluster.local -p 26379 sentinel get-master-addr-by-name mymaster"
  fi
  REDIS_SENTINEL_INFO=($($sentinel_info_command))
  REDIS_MASTER_HOST=${REDIS_SENTINEL_INFO[0]}
  REDIS_MASTER_PORT_NUMBER=${REDIS_SENTINEL_INFO[1]}

  sentinel_conf_set "sentinel monitor" "mymaster "$REDIS_MASTER_HOST" "$REDIS_MASTER_PORT_NUMBER" 2"
fi
touch /data/sentinelboot.lock
redis-server /opt/bitnami/redis-sentinel/etc/sentinel.conf --sentinel

Source: redis/templates/configmap.yaml

apiVersion: v1 kind: ConfigMap metadata: name: redis-master namespace: default labels: app: redis chart: redis-11.0.0 heritage: Tiller release: redis-master data: redis.conf: |-

User-supplied configuration:

# Enable AOF https://redis.io/topics/persistence#append-only-file
appendonly yes
appendfsync everysec
#no-appendfsync-on-rewrite no
#save 900 1
#save 300 10
#save 60 10000
# Disable RDB persistence, AOF persistence already enabled.
save ""

master.conf: |-
  dir /data
replica.conf: |-
  dir /data
  slave-read-only yes
  rename-command FLUSHDB ""
  rename-command FLUSHALL ""
sentinel.conf: |-
  dir "/tmp"
  bind 0.0.0.0
  port 26379
  sentinel monitor mymaster redis-master-node-0.redis-master-headless.default.svc.cluster.local 6379 2
  sentinel down-after-milliseconds mymaster 60000
  sentinel failover-timeout mymaster 18000
  sentinel parallel-syncs mymaster 1

Source: redis/templates/health-configmap.yaml

apiVersion: v1 kind: ConfigMap metadata: name: redis-master-health namespace: default labels: app: redis chart: redis-11.0.0 heritage: Tiller release: redis-master data: ping_readiness_local.sh: |-

#!/bin/bash

response=$(
  timeout -s 3 $1 \
  redis-cli \
    -h localhost \
    -p $REDIS_PORT \
    ping
)
if [ "$response" != "PONG" ]; then
  echo "$response"
  exit 1
fi

ping_liveness_local.sh: |-

#!/bin/bash

response=$(
  timeout -s 3 $1 \
  redis-cli \
    -h localhost \
    -p $REDIS_PORT \
    ping
)
if [ "$response" != "PONG" ] && [ "$response" != "LOADING Redis is loading the dataset in memory" ]; then
  echo "$response"
  exit 1
fi

ping_sentinel.sh: |-

#!/bin/bash

 response=$(
  timeout -s 3 $1 \
  redis-cli \
    -h localhost \
    -p $REDIS_SENTINEL_PORT \
    ping
)
if [ "$response" != "PONG" ]; then
  echo "$response"
  exit 1
fi

parse_sentinels.awk: |-
  /ip/ {FOUND_IP=1}
  /port/ {FOUND_PORT=1}
  /runid/ {FOUND_RUNID=1}
  !/ip|port|runid/ {
    if (FOUND_IP==1) { IP=$1; FOUND_IP=0; }
    else if (FOUND_PORT==1) { PORT=$1; FOUND_PORT=0; }
    else if (FOUND_RUNID==1) { printf "\nsentinel known-sentinel mymaster %s %s %s", IP, PORT, $0; FOUND_RUNID=0; }
  }
ping_readiness_master.sh: |-

#!/bin/bash

 response=$(
  timeout -s 3 $1 \
  redis-cli \
    -h $REDIS_MASTER_HOST \
    -p $REDIS_MASTER_PORT_NUMBER \
    ping
)
if [ "$response" != "PONG" ]; then
  echo "$response"
  exit 1
fi

ping_liveness_master.sh: |-

#!/bin/bash

response=$(
  timeout -s 3 $1 \
  redis-cli \
    -h $REDIS_MASTER_HOST \
    -p $REDIS_MASTER_PORT_NUMBER \
    ping
)
if [ "$response" != "PONG" ] && [ "$response" != "LOADING Redis is loading the dataset in memory" ]; then
  echo "$response"
  exit 1
fi

ping_readiness_local_and_master.sh: |-
  script_dir="$(dirname "$0")"
  exit_status=0
  "$script_dir/ping_readiness_local.sh" $1 || exit_status=$?
  "$script_dir/ping_readiness_master.sh" $1 || exit_status=$?
  exit $exit_status
ping_liveness_local_and_master.sh: |-
  script_dir="$(dirname "$0")"
  exit_status=0
  "$script_dir/ping_liveness_local.sh" $1 || exit_status=$?
  "$script_dir/ping_liveness_master.sh" $1 || exit_status=$?
  exit $exit_status

Source: redis/templates/pvc.yaml

apiVersion: v1 kind: PersistentVolume metadata: name: redis-data-redis-master-node-3 labels: type: local spec: capacity: storage: 2Gi accessModes:

RESOURCES: ==> v1/ConfigMap NAME AGE redis-master 0s redis-master-health 0s redis-master-scripts 0s

==> v1/PersistentVolume NAME AGE redis-data-redis-master-node-0 0s redis-data-redis-master-node-1 0s redis-data-redis-master-node-2 0s redis-data-redis-master-node-3 0s

==> v1/PersistentVolumeClaim NAME AGE redis-data-redis-master-node-0 0s redis-data-redis-master-node-1 0s redis-data-redis-master-node-2 0s redis-data-redis-master-node-3 0s

==> v1/Pod(related) NAME AGE redis-master-node-0 0s

==> v1/Service NAME AGE redis-master 0s redis-master-headless 0s

==> v1/StatefulSet NAME AGE redis-master-node 0s

NOTES: Please be patient while the chart is being deployed. Redis can be accessed via port 6379 on the following DNS name from within your cluster:

redis-master.default.svc.cluster.local for read only operations

For read/write operations, first access the Redis Sentinel cluster, which is available in port 26379 using the same domain name above.

To connect to your Redis server:

  1. Run a Redis pod that you can use as a client: kubectl run --namespace default redis-master-client --rm --tty -i --restart='Never' \

    --image docker.io/bitnami/redis:6.0.8-debian-10-r0 -- bash

  2. Connect using the Redis CLI: redis-cli -h redis-master -p 6379 # Read only operations redis-cli -h redis-master -p 26379 # Sentinel access

To connect to your database from outside the cluster execute the following commands:

kubectl port-forward --namespace default svc/redis-master-master 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379
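
One detail that may matter when reproducing: in this deployment the failover window is driven by the sentinel.conf values shown above (sentinel down-after-milliseconds mymaster 60000 and sentinel failover-timeout mymaster 18000), so Sentinel will not even start a failover until the master has been unreachable for 60 seconds. These values can be read back at runtime (a sketch; the "sentinel" container name is an assumption):

# Inspect the monitored master and its timing parameters as Sentinel sees them
kubectl exec redis-master-node-0 -c sentinel -- redis-cli -p 26379 sentinel master mymaster
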
etavene commented 3 years ago

@javsalgar

We created a chart using an image built from the redis 6.0.8 code base, ran a couple of performance comparison tests, and observed that the Bitnami chart gave lower performance.

./redis-benchmark -h 10.233.127.60 -t hset -r 100000 -n 1000000

The above command gave 64968.81 requests per second on a standalone Redis server deployed with the Helm chart built from the redis-6.0.8 code base, whereas the Bitnami chart you provide gave us around 62262.62 requests per second. Is there any reason for this latency difference? As both DB configs are the same, what could be the reason for this behaviour?
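
A gap of roughly 4% can fall within run-to-run variance for redis-benchmark, so it may be worth repeating the measurement a few times before comparing (an illustrative sketch using the same parameters as above):

# Repeat the run and collect CSV output so the results are easy to average
for i in 1 2 3; do
  ./redis-benchmark -h 10.233.127.60 -t hset -r 100000 -n 1000000 --csv
done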

Snippet of our Dockerfile:

###############################################################
FROM debian:buster-slim
ENV REDIS_VERSION 6.0.8
ENV REDIS_DOWNLOAD_URL http://download.redis.io/releases/redis-6.0.8.tar.gz
###################################################################

Bitnami Templates Results:

====== HSET ======
  1000000 requests completed in 16.06 seconds (62262.62 requests per second)
  50 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save":
  host configuration "appendonly": yes
  multi-thread: no

Our template (built from the redis code base) results:

====== HSET ======
  1000000 requests completed in 15.39 seconds (64968.81 requests per second)
  50 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save":
  host configuration "appendonly": yes
  multi-thread: no

Bitnami Templates DB config (defaults are not changed):
1) "rdbchecksum" 2) "yes" 3) "daemonize" 4) "no" 5) "io-threads-do-reads" 6) "no" 7) "lua-replicate-commands" 8) "yes" 9) "always-show-logo" 10) "no" 11) "protected-mode" 12) "no" 13) "rdbcompression" 14) "yes" 15) "rdb-del-sync-files" 16) "no" 17) "activerehashing" 18) "yes" 19) "stop-writes-on-bgsave-error" 20) "yes" 21) "dynamic-hz" 22) "yes" 23) "lazyfree-lazy-eviction" 24) "no" 25) "lazyfree-lazy-expire" 26) "no" 27) "lazyfree-lazy-server-del" 28) "no" 29) "lazyfree-lazy-user-del" 30) "no" 31) "repl-disable-tcp-nodelay" 32) "no" 33) "repl-diskless-sync" 34) "no" 35) "gopher-enabled" 36) "no" 37) "aof-rewrite-incremental-fsync" 38) "yes" 39) "no-appendfsync-on-rewrite" 40) "no" 41) "cluster-require-full-coverage" 42) "yes" 43) "rdb-save-incremental-fsync" 44) "yes" 45) "aof-load-truncated" 46) "yes" 47) "aof-use-rdb-preamble" 48) "yes" 49) "cluster-replica-no-failover" 50) "no" 51) "cluster-slave-no-failover" 52) "no" 53) "replica-lazy-flush" 54) "no" 55) "slave-lazy-flush" 56) "no" 57) "replica-serve-stale-data" 58) "yes" 59) "slave-serve-stale-data" 60) "yes" 61) "replica-read-only" 62) "yes" 63) "slave-read-only" 64) "yes" 65) "replica-ignore-maxmemory" 66) "yes" 67) "slave-ignore-maxmemory" 68) "yes" 69) "jemalloc-bg-thread" 70) "yes" 71) "activedefrag" 72) "no" 73) "syslog-enabled" 74) "no" 75) "cluster-enabled" 76) "no" 77) "appendonly" 78) "yes" 79) "cluster-allow-reads-when-down" 80) "no" 81) "oom-score-adj" 82) "no" 83) "aclfile" 84) "" 85) "unixsocket" 86) "" 87) "pidfile" 88) "" 89) "replica-announce-ip" 90) "" 91) "slave-announce-ip" 92) "" 93) "masteruser" 94) "" 95) "masterauth" 96) "" 97) "cluster-announce-ip" 98) "" 99) "syslog-ident" 100) "redis" 101) "dbfilename" 102) "dump.rdb" 103) "appendfilename" 104) "appendonly.aof" 105) "server_cpulist" 106) "" 107) "bio_cpulist" 108) "" 109) "aof_rewrite_cpulist" 110) "" 111) "bgsave_cpulist" 112) "" 113) "supervised" 114) "no" 115) "syslog-facility" 116) "local0" 117) "repl-diskless-load" 118) "disabled" 119) "loglevel" 120) "notice" 121) "maxmemory-policy" 122) "noeviction" 123) "appendfsync" 124) "everysec" 125) "databases" 126) "16" 127) "port" 128) "6379" 129) "io-threads" 130) "1" 131) "auto-aof-rewrite-percentage" 132) "100" 133) "cluster-replica-validity-factor" 134) "10" 135) "cluster-slave-validity-factor" 136) "10" 137) "list-max-ziplist-size" 138) "-2" 139) "tcp-keepalive" 140) "300" 141) "cluster-migration-barrier" 142) "1" 143) "active-defrag-cycle-min" 144) "1" 145) "active-defrag-cycle-max" 146) "25" 147) "active-defrag-threshold-lower" 148) "10" 149) "active-defrag-threshold-upper" 150) "100" 151) "lfu-log-factor" 152) "10" 153) "lfu-decay-time" 154) "1" 155) "replica-priority" 156) "100" 157) "slave-priority" 158) "100" 159) "repl-diskless-sync-delay" 160) "5" 161) "maxmemory-samples" 162) "5" 163) "timeout" 164) "0" 165) "replica-announce-port" 166) "0" 167) "slave-announce-port" 168) "0" 169) "tcp-backlog" 170) "511" 171) "cluster-announce-bus-port" 172) "0" 173) "cluster-announce-port" 174) "0" 175) "repl-timeout" 176) "60" 177) "repl-ping-replica-period" 178) "10" 179) "repl-ping-slave-period" 180) "10" 181) "list-compress-depth" 182) "0" 183) "rdb-key-save-delay" 184) "0" 185) "key-load-delay" 186) "0" 187) "active-expire-effort" 188) "1" 189) "hz" 190) "10" 191) "min-replicas-to-write" 192) "0" 193) "min-slaves-to-write" 194) "0" 195) "min-replicas-max-lag" 196) "10" 197) "min-slaves-max-lag" 198) "10" 199) "maxclients" 200) "10000" 201) "active-defrag-max-scan-fields" 202) "1000" 203) "slowlog-max-len" 
204) "128" 205) "acllog-max-len" 206) "128" 207) "lua-time-limit" 208) "5000" 209) "cluster-node-timeout" 210) "15000" 211) "slowlog-log-slower-than" 212) "10000" 213) "latency-monitor-threshold" 214) "0" 215) "proto-max-bulk-len" 216) "536870912" 217) "stream-node-max-entries" 218) "100" 219) "repl-backlog-size" 220) "1048576" 221) "maxmemory" 222) "0" 223) "hash-max-ziplist-entries" 224) "512" 225) "set-max-intset-entries" 226) "512" 227) "zset-max-ziplist-entries" 228) "128" 229) "active-defrag-ignore-bytes" 230) "104857600" 231) "hash-max-ziplist-value" 232) "64" 233) "stream-node-max-bytes" 234) "4096" 235) "zset-max-ziplist-value" 236) "64" 237) "hll-sparse-max-bytes" 238) "3000" 239) "tracking-table-max-keys" 240) "1000000" 241) "repl-backlog-ttl" 242) "3600" 243) "auto-aof-rewrite-min-size" 244) "67108864" 245) "tls-port" 246) "0" 247) "tls-session-cache-size" 248) "20480" 249) "tls-session-cache-timeout" 250) "300" 251) "tls-cluster" 252) "no" 253) "tls-replication" 254) "no" 255) "tls-auth-clients" 256) "yes" 257) "tls-prefer-server-ciphers" 258) "no" 259) "tls-session-caching" 260) "yes" 261) "tls-cert-file" 262) "" 263) "tls-key-file" 264) "" 265) "tls-dh-params-file" 266) "" 267) "tls-ca-cert-file" 268) "" 269) "tls-ca-cert-dir" 270) "" 271) "tls-protocols" 272) "" 273) "tls-ciphers" 274) "" 275) "tls-ciphersuites" 276) "" 277) "logfile" 278) "" 279) "client-query-buffer-limit" 280) "1073741824" 281) "watchdog-period" 282) "0" 283) "dir" 284) "/data" 285) "save" 286) "" 287) "client-output-buffer-limit" 288) "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60" 289) "unixsocketperm" 290) "0" 291) "slaveof" 292) "" 293) "notify-keyspace-events" 294) "" 295) "bind" 296) "" 297) "requirepass" 298) "" 299) "oom-score-adj-values" 300) "0 200 800"

Our Templates DB config:

DB config: 1) "rdbchecksum" 2) "yes" 3) "daemonize" 4) "no" 5) "io-threads-do-reads" 6) "no" 7) "lua-replicate-commands" 8) "yes" 9) "always-show-logo" 10) "yes" 11) "protected-mode" 12) "no" 13) "rdbcompression" 14) "yes" 15) "rdb-del-sync-files" 16) "no" 17) "activerehashing" 18) "yes" 19) "stop-writes-on-bgsave-error" 20) "yes" 21) "dynamic-hz" 22) "yes" 23) "lazyfree-lazy-eviction" 24) "no" 25) "lazyfree-lazy-expire" 26) "no" 27) "lazyfree-lazy-server-del" 28) "no" 29) "lazyfree-lazy-user-del" 30) "no" 31) "repl-disable-tcp-nodelay" 32) "no" 33) "repl-diskless-sync" 34) "no" 35) "gopher-enabled" 36) "no" 37) "aof-rewrite-incremental-fsync" 38) "yes" 39) "no-appendfsync-on-rewrite" 40) "no" 41) "cluster-require-full-coverage" 42) "yes" 43) "rdb-save-incremental-fsync" 44) "yes" 45) "aof-load-truncated" 46) "yes" 47) "aof-use-rdb-preamble" 48) "yes" 49) "cluster-replica-no-failover" 50) "no" 51) "cluster-slave-no-failover" 52) "no" 53) "replica-lazy-flush" 54) "no" 55) "slave-lazy-flush" 56) "no" 57) "replica-serve-stale-data" 58) "yes" 59) "slave-serve-stale-data" 60) "yes" 61) "replica-read-only" 62) "yes" 63) "slave-read-only" 64) "yes" 65) "replica-ignore-maxmemory" 66) "yes" 67) "slave-ignore-maxmemory" 68) "yes" 69) "jemalloc-bg-thread" 70) "yes" 71) "activedefrag" 72) "no" 73) "syslog-enabled" 74) "no" 75) "cluster-enabled" 76) "no" 77) "appendonly" 78) "yes" 79) "cluster-allow-reads-when-down" 80) "no" 81) "oom-score-adj" 82) "no" 83) "aclfile" 84) "" 85) "unixsocket" 86) "" 87) "pidfile" 88) "/var/run/redis_6379.pid" 89) "replica-announce-ip" 90) "" 91) "slave-announce-ip" 92) "" 93) "masteruser" 94) "" 95) "masterauth" 96) "" 97) "cluster-announce-ip" 98) "" 99) "syslog-ident" 100) "redis" 101) "dbfilename" 102) "dump.rdb" 103) "appendfilename" 104) "appendonly.aof" 105) "server_cpulist" 106) "" 107) "bio_cpulist" 108) "" 109) "aof_rewrite_cpulist" 110) "" 111) "bgsave_cpulist" 112) "" 113) "supervised" 114) "no" 115) "syslog-facility" 116) "local0" 117) "repl-diskless-load" 118) "disabled" 119) "loglevel" 120) "notice" 121) "maxmemory-policy" 122) "noeviction" 123) "appendfsync" 124) "everysec" 125) "databases" 126) "16" 127) "port" 128) "6379" 129) "io-threads" 130) "1" 131) "auto-aof-rewrite-percentage" 132) "100" 133) "cluster-replica-validity-factor" 134) "10" 135) "cluster-slave-validity-factor" 136) "10" 137) "list-max-ziplist-size" 138) "-2" 139) "tcp-keepalive" 140) "300" 141) "cluster-migration-barrier" 142) "1" 143) "active-defrag-cycle-min" 144) "1" 145) "active-defrag-cycle-max" 146) "25" 147) "active-defrag-threshold-lower" 148) "10" 149) "active-defrag-threshold-upper" 150) "100" 151) "lfu-log-factor" 152) "10" 153) "lfu-decay-time" 154) "1" 155) "replica-priority" 156) "100" 157) "slave-priority" 158) "100" 159) "repl-diskless-sync-delay" 160) "5" 161) "maxmemory-samples" 162) "5" 163) "timeout" 164) "0" 165) "replica-announce-port" 166) "0" 167) "slave-announce-port" 168) "0" 169) "tcp-backlog" 170) "511" 171) "cluster-announce-bus-port" 172) "0" 173) "cluster-announce-port" 174) "0" 175) "repl-timeout" 176) "60" 177) "repl-ping-replica-period" 178) "10" 179) "repl-ping-slave-period" 180) "10" 181) "list-compress-depth" 182) "0" 183) "rdb-key-save-delay" 184) "0" 185) "key-load-delay" 186) "0" 187) "active-expire-effort" 188) "1" 189) "hz" 190) "10" 191) "min-replicas-to-write" 192) "0" 193) "min-slaves-to-write" 194) "0" 195) "min-replicas-max-lag" 196) "10" 197) "min-slaves-max-lag" 198) "10" 199) "maxclients" 200) "10000" 201) "active-defrag-max-scan-fields" 
202) "1000" 203) "slowlog-max-len" 204) "128" 205) "acllog-max-len" 206) "128" 207) "lua-time-limit" 208) "5000" 209) "cluster-node-timeout" 210) "15000" 211) "slowlog-log-slower-than" 212) "10000" 213) "latency-monitor-threshold" 214) "0" 215) "proto-max-bulk-len" 216) "536870912" 217) "stream-node-max-entries" 218) "100" 219) "repl-backlog-size" 220) "1048576" 221) "maxmemory" 222) "0" 223) "hash-max-ziplist-entries" 224) "512" 225) "set-max-intset-entries" 226) "512" 227) "zset-max-ziplist-entries" 228) "128" 229) "active-defrag-ignore-bytes" 230) "104857600" 231) "hash-max-ziplist-value" 232) "64" 233) "stream-node-max-bytes" 234) "4096" 235) "zset-max-ziplist-value" 236) "64" 237) "hll-sparse-max-bytes" 238) "3000" 239) "tracking-table-max-keys" 240) "1000000" 241) "repl-backlog-ttl" 242) "3600" 243) "auto-aof-rewrite-min-size" 244) "67108864" 245) "tls-port" 246) "0" 247) "tls-session-cache-size" 248) "20480" 249) "tls-session-cache-timeout" 250) "300" 251) "tls-cluster" 252) "no" 253) "tls-replication" 254) "no" 255) "tls-auth-clients" 256) "yes" 257) "tls-prefer-server-ciphers" 258) "no" 259) "tls-session-caching" 260) "yes" 261) "tls-cert-file" 262) "" 263) "tls-key-file" 264) "" 265) "tls-dh-params-file" 266) "" 267) "tls-ca-cert-file" 268) "" 269) "tls-ca-cert-dir" 270) "" 271) "tls-protocols" 272) "" 273) "tls-ciphers" 274) "" 275) "tls-ciphersuites" 276) "" 277) "logfile" 278) "" 279) "client-query-buffer-limit" 280) "1073741824" 281) "watchdog-period" 282) "0" 283) "dir" 284) "/data" 285) "save" 286) "" 287) "client-output-buffer-limit" 288) "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60" 289) "unixsocketperm" 290) "0" 291) "slaveof" 292) "" 293) "notify-keyspace-events" 294) "" 295) "bind" 296) "" 297) "requirepass" 298) "" 299) "oom-score-adj-values" 300) "0 200 800"

Bitnami chart redis server info:
redis_version:6.0.8
redis_build_id:e46fe36ddd420afe
redis_mode:standalone
os:Linux 4.4.0-131-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:8.3.0
executable:/redis-server

Our template redis server info:
redis_version:6.0.8
redis_build_id:bea1c0ecc8184f2f
redis_mode:standalone
os:Linux 4.4.0-131-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:8.3.0
config_file:/etc/redis/redis.conf

rafariossaa commented 3 years ago

Hi @etavene, this thread is closed and your question is not related to this issue. Do you mind opening a new one so it can be properly handled?

Thanks beforehand.