
[bitnami/redis] Sentinel is not able to promote a replica to a new master #26114

Closed AbhilashKopalli closed 4 months ago

AbhilashKopalli commented 5 months ago

Name and Version

bitnami/redis

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. Installed the helm chart in sentinel mode.
  2. The deployment has one master and two replicas as StatefulSet pods, with the sentinel container running inside each Redis pod.
  3. When I insert keys on the master, I can see that all the replicas have those keys.
  4. When I delete the master pod with a kubectl delete command, the newly promoted master does not have the keys I inserted before (a sketch of this test follows the list).
  5. Persistence is not set; it is disabled.
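
For reference, a rough sketch of that failover test, assuming the release produces the pods redis-node-0/1/2 in the ml-algoritm-pull namespace (as in the Sentinel output below) and the default redis and sentinel container names; add -a <password> and the --tls options as needed when auth/TLS are enabled:

  # Pod, namespace and container names below match this deployment but are assumptions.
  # Write a test key through the current master (redis-node-0 right after install).
  kubectl exec -n ml-algoritm-pull redis-node-0 -c redis -- redis-cli SET testkey hello

  # Delete the master pod to force a Sentinel failover.
  kubectl delete pod -n ml-algoritm-pull redis-node-0

  # Ask a surviving Sentinel which node was promoted...
  kubectl exec -n ml-algoritm-pull redis-node-1 -c sentinel -- redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster

  # ...and check whether the key survived on the new master.
  kubectl exec -n ml-algoritm-pull redis-node-1 -c redis -- redis-cli GET testkey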

Are you using any custom parameters or values?

I have set sentinel as enabled and persistence as off, and I have also enabled TLS.
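
For context, an install along those lines would look roughly like the following. The release name and TLS secret name are placeholders, and the parameter names are taken from recent bitnami/redis chart versions, so check them against the README of the chart version in use:

  # Release name "redis" gives pods named redis-node-0/1/2 as in the output below.
  # "redis-tls-certs" is a placeholder secret holding tls.crt, tls.key and ca.crt.
  helm install redis bitnami/redis -n ml-algoritm-pull \
    --set sentinel.enabled=true \
    --set replica.replicaCount=3 \
    --set master.persistence.enabled=false \
    --set replica.persistence.enabled=false \
    --set tls.enabled=true \
    --set tls.authClients=true \
    --set tls.existingSecret=redis-tls-certs \
    --set tls.certFilename=tls.crt \
    --set tls.certKeyFilename=tls.key \
    --set tls.certCAFilename=ca.crt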

SENTINEL MASTER output from the master instance:
127.0.0.1:26379> SENTINEL MASTER mymaster
 1) "name"
 2) "mymaster"
 3) "ip"
 4) "redis-node-0.redis-node-headless.ml-algoritm-pull.svc.cluster.local"
 5) "port"
 6) "6379"
 7) "runid"
 8) "2b1ef17e0a0a8bc5daf9979efc4166e152b3f8b1"
 9) "flags"
10) "master"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "895"
19) "last-ping-reply"
20) "895"
21) "down-after-milliseconds"
22) "60000"
23) "info-refresh"
24) "8807"
25) "role-reported"
26) "master"
27) "role-reported-time"
28) "6352436"
29) "config-epoch"
30) "0"
31) "num-slaves"
32) "2"
33) "num-other-sentinels"
34) "2"
35) "quorum"
36) "2"
37) "failover-timeout"
38) "180000"
39) "parallel-syncs"
40) "1"

What is the expected behavior?

The promoted replica should become the new master and retain all of the keys that were inserted before.

What do you see instead?

The keys are missing from the new master instance.

Additional information

Is this the expected behavior or am I missing anything?

carrodher commented 5 months ago

What are the values used to deploy the chart?

AbhilashKopalli commented 5 months ago

Hi @carrodher,

We were able to resolve this issue: it was caused by an incorrect TLS port configuration, which kept the pre-hook steps for Redis and Sentinel from working.
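
For anyone hitting the same symptom, one way to see what port configuration the running instances actually ended up with is CONFIG GET; a sketch only, with the same placeholder certificate paths as above and auth options omitted:

  # With TLS enabled, the Redis port is expected on tls-port and the plain
  # port is typically disabled (0); the glob returns both settings.
  kubectl exec -n ml-algoritm-pull redis-node-0 -c redis -- \
    redis-cli --tls \
      --cert /opt/bitnami/redis/certs/tls.crt \
      --key /opt/bitnami/redis/certs/tls.key \
      --cacert /opt/bitnami/redis/certs/ca.crt \
      CONFIG GET '*port*'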

However, could you please answer the following questions:

1) Suppose we perform a helm upgrade and want it to happen seamlessly, without affecting or losing the existing keys.
2) Is there any scenario we should take into consideration where we might lose the keys, during a failover or in any other situation?

Currently, when I test with multiple keys and issue the kubectl delete command, one of the replicas gets promoted to master and the keys are intact. Is there any situation I have missed or should be cautious about?
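
As a general sanity check (not specific to this chart), before deliberately removing the master or starting an upgrade one can confirm the replicas are connected and caught up with the master's replication offset; a rough sketch, with TLS/auth options omitted:

  # On the current master: connected_slaves should match the expected replica
  # count, and each slaveN offset should be at (or very near) master_repl_offset.
  kubectl exec -n ml-algoritm-pull redis-node-0 -c redis -- \
    redis-cli INFO replication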

carrodher commented 5 months ago

The issue may not be directly related to the Bitnami container image or Helm chart, but rather to how the application is being utilized or configured in your specific environment.

If you have any questions about the application itself, customizing its content, or questions about technology and infrastructure usage, we highly recommend that you refer to the forums and user guides provided by the project responsible for the application or technology.

With that said, we'll keep this ticket open until the stale bot automatically closes it, in case someone from the community contributes valuable insights.

AbhilashKopalli commented 5 months ago

Thanks a lot @carrodher. Really appreciate your timely help.

github-actions[bot] commented 4 months ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 4 months ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.