DandyDeveloper / charts

Various helm charts migrated from [helm/stable] due to deprecation
https://dandydeveloper.github.io/charts
Apache License 2.0

[chart/redis-ha][REQUEST] Support ipv6 #98

Closed KimMJ closed 2 years ago

KimMJ commented 3 years ago

Is your feature request related to a problem? Please describe.

I have an IPv6 Kubernetes cluster and tried to install Redis with the redis-ha helm chart. It failed; here are the issues:

Describe the solution you'd like

It seems it would be enough to make IPv6 configurable in the values file. In the future, IPv4/IPv6 dual-stack support should also be considered.

DandyDeveloper commented 3 years ago

@KimMJ Is it that the images do not support IPv6, or that a specific setting isn't supported in Redis?

Can you provide the exact errors? Even in a cluster using IPv6, don't the pods also register addresses from an IPv4 CIDR?

For example, if you run kubectl get pods -o wide, is an IPv6 address all that is visible?

KimMJ commented 3 years ago

@DandyDeveloper

My k8s cluster is an "IPv6 only" cluster, so no IPv4 CIDR gets used by pods to register addresses. I checked again, and "the redis:6.0.7-alpine image doesn't bind IPv6" was wrong information. Let's focus on the haproxy settings.

kubernetes cluster

$ kubectl get pods -o wide -n redis-debug
NAME                                           READY   STATUS             RESTARTS   AGE   IP                               NODE           NOMINATED NODE   READINESS GATES
redis-debug-redis-ha-haproxy-cf89b49ff-rhw82   0/1     CrashLoopBackOff   9          13m   dead:beef::8e22:765f:6121:eb85   controller-0   <none>           <none>
redis-debug-redis-ha-haproxy-cf89b49ff-zrn6x   0/1     CrashLoopBackOff   10         13m   dead:beef::8e22:765f:6121:eba4   controller-0   <none>           <none>
redis-debug-redis-ha-server-0                  2/2     Running            0          13m   dead:beef::8e22:765f:6121:ebb6   controller-0   <none>           <none>
redis-debug-redis-ha-server-1                  2/2     Running            0          12m   dead:beef::8e22:765f:6121:ebaa   controller-0   <none>           <none>
redis-debug-redis-ha-server-2                  2/2     Running            0          12m   dead:beef::8e22:765f:6121:eb97   controller-0   <none>           <none>

redis-server

attach to the redis-server pod

$ kubectl exec -it -n redis-debug redis-debug-redis-ha-server-0 -- sh

use netstat to check the bound ports

/data $ netstat -anlp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:26379           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      1/redis-server
tcp        0      0 :::26379                :::*                    LISTEN      -
tcp        0      0 :::6379                 :::*                    LISTEN      1/redis-server
tcp        0      0 dead:beef::8e22:765f:6121:ebb6:6379 face::4:65214           ESTABLISHED 1/redis-server
tcp        0      0 dead:beef::8e22:765f:6121:ebb6:44162 fd04::70da:6379         ESTABLISHED -
tcp        0      0 dead:beef::8e22:765f:6121:ebb6:26379 dead:beef::8e22:765f:6121:eb85:57624 TIME_WAIT   -
tcp        0      0 dead:beef::8e22:765f:6121:ebb6:44164 fd04::70da:6379         ESTABLISHED -
tcp        0      0 dead:beef::8e22:765f:6121:ebb6:6379 dead:beef::8e22:765f:6121:eb85:45710 ESTABLISHED 1/redis-server
tcp        0      0 dead:beef::8e22:765f:6121:ebb6:6379 dead:beef::8e22:765f:6121:eba4:43698 TIME_WAIT   -
tcp        0      0 dead:beef::8e22:765f:6121:ebb6:26379 dead:beef::8e22:765f:6121:eb85:39988 ESTABLISHED -
tcp        0      0 dead:beef::8e22:765f:6121:ebb6:26379 dead:beef::8e22:765f:6121:eba4:42476 TIME_WAIT   -
tcp        0      0 dead:beef::8e22:765f:6121:ebb6:6379 face::4:28386           ESTABLISHED 1/redis-server
tcp        0      0 dead:beef::8e22:765f:6121:ebb6:26379 dead:beef::8e22:765f:6121:eb85:40002 ESTABLISHED -
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node PID/Program name    Path
/data $ ifconfig
eth0      Link encap:Ethernet  HWaddr 86:67:0F:63:A1:BD
          inet6 addr: fe80::8467:fff:fe63:a1bd/64 Scope:Link
          inet6 addr: dead:beef::8e22:765f:6121:ebb6/128 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5130 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4377 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648643 (633.4 KiB)  TX bytes:623068 (608.4 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:110 errors:0 dropped:0 overruns:0 frame:0
          TX packets:110 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6127 (5.9 KiB)  TX bytes:6127 (5.9 KiB)

check route table

ipv4

/data $ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface

ipv6

/data $ route -n -A inet6
Kernel IPv6 routing table
Destination                                 Next Hop                                Flags Metric Ref    Use Iface
dead:beef::8e22:765f:6121:ebb6/128          ::                                      U     256    0        0 eth0
fe80::/64                                   ::                                      U     256    0        0 eth0
::/0                                        fe80::ecee:eeff:feee:eeee               UG    1024   36    1176 eth0
::/0                                        ::                                      !n    -1     1     1177 lo
::1/128                                     ::                                      Un    0      1        0 lo
dead:beef::8e22:765f:6121:ebb6/128          ::                                      Un    0      50    1181 lo
fe80::8467:fff:fe63:a1bd/128                ::                                      Un    0      3        2 lo
ff00::/8                                    ::                                      U     256    2        9 eth0
::/0                                        ::                                      !n    -1     1     1177 lo
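The `:::6379` and `:::26379` wildcard entries in the netstat output above show redis-server and sentinel listening on IPv6. As a sanity check, this could be confirmed from another pod; a hypothetical session (the container name `redis` and the target address are assumptions taken from the chart's pod layout and the `kubectl get pods -o wide` output above):

```shell
# Hypothetical check: ping redis-server over its IPv6 pod address
# (the address reported for redis-debug-redis-ha-server-0).
kubectl exec -it -n redis-debug redis-debug-redis-ha-server-1 -c redis \
  -- redis-cli -h dead:beef::8e22:765f:6121:ebb6 -p 6379 ping
# A healthy IPv6 bind should answer PONG.
```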

helm values

replicas: 3

haproxy:
  enabled: true
  # Enable if you want a dedicated port in haproxy for redis-slaves
  readOnly:
    enabled: true                                                      
    port: 6380
  replicas: 2
  hardAntiAffinity: false                                               
## Redis specific configuration options
redis:
  port: 6379
  resources:
    requests:
      memory: 512Mi
      cpu: 100m
    limits:
      memory: 1Gi

## Sentinel specific configuration options
sentinel:
  port: 26379
  quorum: 2
  config:
    down-after-milliseconds: 1000
    failover-timeout: 180000
    parallel-syncs: 5
    maxclients: 10000
  resources:
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 200Mi

hardAntiAffinity: false

persistentVolume:
  enabled: true
  storageClass: "general"
  accessModes:
    - ReadWriteOnce                                                                            
  size: 1Gi
  annotations: {}
  reclaimPolicy: ""
init:
  resources: {}
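With these values, the haproxy config the chart will render can be inspected before installing. A sketch, assuming the repo alias `dandydev` and a local `values.yaml` (both placeholders):

```shell
# Render the chart locally and list the bind directives haproxy will use.
helm template redis-debug dandydev/redis-ha -f values.yaml \
  | grep -E '^[[:space:]]*bind '
```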

redis-ha-haproxy

logs

$ kubectl logs -n redis-debug redis-debug-redis-ha-haproxy-cf89b49ff-rhw82
[NOTICE] 016/114636 (1) : New worker #1 (6) forked
[WARNING] 016/114638 (6) : Server check_if_redis_is_master_1/R0 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string 'fd04::55e2')", check duration: 1000ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 016/114638 (6) : Server check_if_redis_is_master_1/R1 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string 'fd04::55e2')", check duration: 1000ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 016/114638 (6) : Server check_if_redis_is_master_1/R2 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string 'fd04::55e2')", check duration: 1000ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 016/114638 (6) : backend 'check_if_redis_is_master_1' has no server available!
[WARNING] 016/114638 (6) : Server check_if_redis_is_master_2/R0 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string 'fd04::88d8')", check duration: 1000ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 016/114638 (6) : Server check_if_redis_is_master_2/R1 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string 'fd04::88d8')", check duration: 1000ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 016/114638 (6) : Server check_if_redis_is_master_2/R2 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string 'fd04::88d8')", check duration: 1000ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 016/114638 (6) : backend 'check_if_redis_is_master_2' has no server available!
[WARNING] 016/114638 (6) : Server bk_redis_master/R1 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string 'role:master')", check duration: 1001ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 016/114638 (6) : Server bk_redis_master/R2 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string 'role:master')", check duration: 1001ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 016/114638 (6) : Server bk_redis_slave/R0 is DOWN, reason: Layer7 timeout, info: " at step 5 of tcp-check (expect string 'role:slave')", check duration: 1000ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] 016/114647 (1) : Exiting Master process...
[WARNING] 016/114647 (6) : Stopping proxy health_check_http_url in 0 ms.
[WARNING] 016/114647 (6) : Stopping backend check_if_redis_is_master_0 in 0 ms.
[WARNING] 016/114647 (6) : Stopping backend check_if_redis_is_master_1 in 0 ms.
[WARNING] 016/114647 (6) : Stopping backend check_if_redis_is_master_2 in 0 ms.
[WARNING] 016/114647 (6) : Stopping frontend ft_redis_master in 0 ms.
[WARNING] 016/114647 (6) : Stopping frontend ft_redis_slave in 0 ms.
[WARNING] 016/114647 (6) : Stopping backend bk_redis_master in 0 ms.
[WARNING] 016/114647 (6) : Stopping backend bk_redis_slave in 0 ms.
[WARNING] 016/114647 (6) : Stopping frontend GLOBAL in 0 ms.
[WARNING] 016/114647 (6) : Proxy health_check_http_url stopped (FE: 0 conns, BE: 0 conns).
[WARNING] 016/114647 (6) : Proxy check_if_redis_is_master_0 stopped (FE: 0 conns, BE: 0 conns).
[WARNING] 016/114647 (6) : Proxy check_if_redis_is_master_1 stopped (FE: 0 conns, BE: 0 conns).
[WARNING] 016/114647 (6) : Proxy check_if_redis_is_master_2 stopped (FE: 0 conns, BE: 0 conns).
[WARNING] 016/114647 (6) : Proxy ft_redis_master stopped (FE: 0 conns, BE: 0 conns).
[WARNING] 016/114647 (6) : Proxy ft_redis_slave stopped (FE: 0 conns, BE: 0 conns).
[WARNING] 016/114647 (6) : Proxy bk_redis_master stopped (FE: 0 conns, BE: 0 conns).
[WARNING] 016/114647 (6) : Proxy bk_redis_slave stopped (FE: 0 conns, BE: 0 conns).
[WARNING] 016/114647 (6) : Proxy GLOBAL stopped (FE: 0 conns, BE: 0 conns).
[ALERT] 016/114648 (1) : Current worker #1 (6) exited with code 0 (Exit)
[WARNING] 016/114648 (1) : All workers exited. Exiting... (0)

So, I changed the config file as mentioned above.

In the templates/_configs.tpl file, change every bind directive such as `bind :8888` to `bind ipv6@:8888`.
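For illustration, the change amounts to something like this in the rendered haproxy.cfg (a sketch of one frontend, not the full template; haproxy's bare `bind :PORT` binds the IPv4 wildcard, while the `ipv6@` prefix selects the IPv6 wildcard):

```
# before: binds the IPv4 wildcard (0.0.0.0) only
frontend ft_redis_master
  bind :6379

# after: binds the IPv6 wildcard (::) instead
frontend ft_redis_master
  bind ipv6@:6379
```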

I think this occurs because the haproxy container cannot reach the redis-ha-server containers over IPv4.

archoversight commented 2 years ago

I've fixed this by supporting both IPv6 and IPv4 in the same configuration file, see #186 for my proposed solution.
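For reference, haproxy can also accept both families on a single socket: an IPv6 wildcard bind with the `v4v6` keyword handles IPv4 clients as v4-mapped addresses (this assumes the host allows dual-stack sockets, i.e. `net.ipv6.bindv6only=0`). A minimal sketch:

```
# One-socket dual-stack bind: IPv6 wildcard that also
# accepts IPv4 clients as v4-mapped addresses.
frontend ft_redis_master
  bind ipv6@:6379 v4v6
```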