aleclerc-sonrai closed this issue 2 years ago.
@aleclerc-sonrai Can you provide the separate values files you used?
This looks to me like the cluster names are the same in both sets of values, and as a result the Redis side is actually finding the other install's pods.
Can you also run `kubectl get endpoints` and check whether all the pods are being picked up by the redisX-redis services?
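To make that check concrete, here is a hedged sketch; the `overlap_ips` helper and the release names `redis1`/`redis2` are hypothetical, not part of the chart:

```shell
# overlap_ips: print pod IPs that appear in BOTH space-separated lists.
# Useful for spotting endpoints that are shared between two releases.
overlap_ips() {
  comm -12 <(tr ' ' '\n' <<<"$1" | sort -u) <(tr ' ' '\n' <<<"$2" | sort -u)
}

# Usage against a live cluster (requires kubectl access; release names
# redis1/redis2 are placeholders for your actual installs):
# overlap_ips \
#   "$(kubectl get endpoints redis1-redis-ha -o jsonpath='{.subsets[*].addresses[*].ip}')" \
#   "$(kubectl get endpoints redis2-redis-ha -o jsonpath='{.subsets[*].addresses[*].ip}')"
```

If the function prints anything, the two services are selecting the same pods.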
I'm having this same problem.
> This looks to me like the cluster names are the same in your values
By "cluster name" do you mean master set? Yes, both redis installations use the default master set "mymaster". But why should that matter? The IPs would be different for each installation, so how are they finding each other?
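For context: the master set name is the first argument to `sentinel monitor` in each install's sentinel.conf, so with the chart default both installs announce the same set name. A minimal sketch (the IP and quorum values are hypothetical):

```conf
# Both installations render the same default master set name "mymaster";
# only the monitored IP differs per release.
sentinel monitor mymaster 10.100.123.10 6379 2
```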
> do a `kubectl get endpoints` and check if all the pods are being picked up
Yes, it looks like somehow pods from both installations are being added to both endpoints.
Aha, I see: it's because the selector for the main services (or, more specifically, the `app` label) isn't reacting to `fullnameOverride`, which is the value I changed to differentiate the installations. 🤔
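A hedged sketch of what that collision looks like in the rendered manifests; the names and label values below are illustrative, not taken from the chart's actual templates:

```yaml
# Hypothetical rendered Service from one release: fullnameOverride changes
# metadata.name, but the selector still uses the shared chart label, so the
# Services of BOTH releases select pods from both installations.
apiVersion: v1
kind: Service
metadata:
  name: redis1-override        # differs per release via fullnameOverride
spec:
  selector:
    app: redis-ha              # identical across releases -> endpoints overlap
  ports:
    - name: redis
      port: 6379
```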
@DandyDeveloper thoughts on my PR?
@jimethn Correct. It's because of the selector & headless service. I'll review your PR in a bit.
Fixed in #197
Describe the bug
I've hit this several times, and I'm not quite sure whether it's a network issue or something in the code. I have many redis-ha installs in my k8s cluster (50+), and on the odd occasion one of the redis sentinels from a separate cluster takes 'ownership' of a slave in the other cluster, causing the two sentinels to fight over it.
Cluster 1 (whose slave is fighting)
Cluster 2 Sentinel Config
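(The original config block is not preserved here; a hypothetical reconstruction follows, in which every value except the `10.100.123.170` address is invented for illustration:)

```conf
sentinel monitor mymaster 10.100.200.10 6379 2
sentinel known-replica mymaster 10.100.200.11 6379
sentinel known-replica mymaster 10.100.123.170 6379   # a Cluster 1 pod -- wrong cluster
```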
Note the 10.100.123.170 IP (from Cluster 1) in the known replicas. Then in the logs of redis1, which is trying to restart/sync:
These two lines in particular show the sentinels fighting over the redis pods.