apecloud / kubeblocks

KubeBlocks is an open-source control plane software that runs and manages databases, message queues and other stateful applications on K8s.
https://kubeblocks.io
GNU Affero General Public License v3.0

[BUG] redis cluster scale shards 3 --> 5 provision pod error #6966

Open · JashBook opened this issue 3 months ago

JashBook commented 3 months ago

Describe the bug
Scaling a Redis cluster from 3 shards to 5 leaves the post-provision job pod for a new shard in Error: redis-cli --cluster add-node cannot connect to the new shard pod's headless address (No address associated with hostname).

To Reproduce
Steps to reproduce the behavior:

  1. create cluster
    apiVersion: apps.kubeblocks.io/v1alpha1
    kind: Cluster
    metadata:
      name: redisc-pupldo
      namespace: default
    spec:
      terminationPolicy: Delete
      shardingSpecs:
        - name: shard
          shards: 3
          template:
            name: redis
            componentDef: redis-cluster
            replicas: 2
            switchPolicy:
              type: Noop
            resources:
              limits:
                cpu: 100m
                memory: 0.5Gi
              requests:
                cpu: 100m
                memory: 0.5Gi
            volumeClaimTemplates:
              - name: data
                spec:
                  accessModes:
                    - ReadWriteOnce
                  resources:
                    requests:
                      storage: 1Gi
  2. scale shards 3 --> 5 (a non-interactive patch equivalent is sketched after the pod listing below)
    kubectl edit cluster redisc-pupldo
  3. See error
    kubectl get pod 
    NAME                                                  READY   STATUS    RESTARTS   AGE
    kb-post-provision-job-redisc-pupldo-shard-5n9-j6nzq   1/1     Running   0          9s
    kb-post-provision-job-redisc-pupldo-shard-mxw-q7cdh   0/1     Error     0          11s
    kb-post-provision-job-redisc-pupldo-shard-mxw-w99ck   1/1     Running   0          2s
    redisc-pupldo-shard-5n9-0                             3/3     Running   0          41s
    redisc-pupldo-shard-5n9-1                             3/3     Running   0          41s
    redisc-pupldo-shard-6v7-0                             3/3     Running   0          9m31s
    redisc-pupldo-shard-6v7-1                             3/3     Running   0          9m31s
    redisc-pupldo-shard-bjt-0                             3/3     Running   0          9m31s
    redisc-pupldo-shard-bjt-1                             3/3     Running   0          9m32s
    redisc-pupldo-shard-l8g-0                             3/3     Running   0          9m31s
    redisc-pupldo-shard-l8g-1                             3/3     Running   0          9m31s
    redisc-pupldo-shard-mxw-0                             3/3     Running   0          39s
    redisc-pupldo-shard-mxw-1                             3/3     Running   0          39s
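
For reference, the shard change in step 2 can also be applied non-interactively; a minimal sketch, assuming the shard template shown above is the only entry in spec.shardingSpecs:

    # Bump the shard count of the first (and only) shardingSpecs entry from 3 to 5
    kubectl patch cluster redisc-pupldo -n default --type='json' \
      -p='[{"op":"replace","path":"/spec/shardingSpecs/0/shards","value":5}]'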

Logs from the error pod:

kubectl logs kb-post-provision-job-redisc-pupldo-shard-mxw-q7cdh 
+ '[' 1 -eq 1 ']'
+ case $1 in
+ initialize_or_scale_out_redis_cluster
+ wait_random_second 10 1
+ local max_time=10
+ local min_time=1
+ local random_time=4
+ echo 'Sleeping for 4 seconds'
+ sleep 4
Sleeping for 4 seconds
+ is_redis_cluster_initialized
+ '[' -z 10.244.14.173,10.244.14.177,10.244.14.175,10.244.14.176,10.244.14.186,10.244.14.187,10.244.14.188,10.244.14.189,10.244.14.174,10.244.14.178 ']'
+ local initialized=false
++ tr , ' '
++ echo 10.244.14.173,10.244.14.177,10.244.14.175,10.244.14.176,10.244.14.186,10.244.14.187,10.244.14.188,10.244.14.189,10.244.14.174,10.244.14.178
+ for pod_ip in $(echo "$KB_CLUSTER_POD_IP_LIST" | tr ',' ' ')
++ redis-cli -h 10.244.14.173 -a O3605v7HsS cluster info
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
+ cluster_info='cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:2
cluster_stats_messages_ping_sent:917
cluster_stats_messages_pong_sent:926
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:1844
cluster_stats_messages_ping_received:926
cluster_stats_messages_pong_received:918
cluster_stats_messages_received:1844
total_cluster_links_buffer_limit_exceeded:0'
+ echo 'cluster_info cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:2
cluster_stats_messages_ping_sent:917
cluster_stats_messages_pong_sent:926
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:1844
cluster_stats_messages_ping_received:926
cluster_stats_messages_pong_received:918
cluster_stats_messages_received:1844
total_cluster_links_buffer_limit_exceeded:0'
cluster_info cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:2
cluster_stats_messages_ping_sent:917
cluster_stats_messages_pong_sent:926
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:1844
cluster_stats_messages_ping_received:926
cluster_stats_messages_pong_received:918
cluster_stats_messages_received:1844
total_cluster_links_buffer_limit_exceeded:0
++ grep -oP '(?<=cluster_state:)[^\s]+'
++ echo 'cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:2
cluster_stats_messages_ping_sent:917
cluster_stats_messages_pong_sent:926
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:1844
cluster_stats_messages_ping_received:926
cluster_stats_messages_pong_received:918
cluster_stats_messages_received:1844
total_cluster_links_buffer_limit_exceeded:0'
+ cluster_state=ok
+ '[' -z ok ']'
+ '[' ok == ok ']'
+ echo 'Redis Cluster already initialized'
+ initialized=true
+ break
+ '[' true = true ']'
+ echo 'Redis Cluster already initialized, scaling out the shard...'
+ scale_out_redis_cluster_shard
+ init_other_components_and_pods_info shard-mxw 10.244.14.173,10.244.14.177,10.244.14.175,10.244.14.176,10.244.14.186,10.244.14.187,10.244.14.188,10.244.14.189,10.244.14.174,10.244.14.178 redisc-pupldo-shard-bjt-1,redisc-pupldo-shard-bjt-0,redisc-pupldo-shard-6v7-1,redisc-pupldo-shard-6v7-0,redisc-pupldo-shard-5n9-0,redisc-pupldo-shard-5n9-1,redisc-pupldo-shard-mxw-1,redisc-pupldo-shard-mxw-0,redisc-pupldo-shard-l8g-1,redisc-pupldo-shard-l8g-0 shard-bjt,shard-6v7,shard-5n9,shard-mxw,shard-l8g '' ''
+ local component=shard-mxw
+ local all_pod_ip_list=10.244.14.173,10.244.14.177,10.244.14.175,10.244.14.176,10.244.14.186,10.244.14.187,10.244.14.188,10.244.14.189,10.244.14.174,10.244.14.178
+ local all_pod_name_list=redisc-pupldo-shard-bjt-1,redisc-pupldo-shard-bjt-0,redisc-pupldo-shard-6v7-1,redisc-pupldo-shard-6v7-0,redisc-pupldo-shard-5n9-0,redisc-pupldo-shard-5n9-1,redisc-pupldo-shard-mxw-1,redisc-pupldo-shard-mxw-0,redisc-pupldo-shard-l8g-1,redisc-pupldo-shard-l8g-0
+ local all_component_list=shard-bjt,shard-6v7,shard-5n9,shard-mxw,shard-l8g
+ local all_deleting_component_list=
+ local all_undeleted_component_list=
+ other_components=()
+ other_deleting_components=()
+ other_undeleted_components=()
+ other_undeleted_component_pod_ips=()
+ other_undeleted_component_pod_names=()
+ other_undeleted_component_nodes=()
+ echo 'init other components and pods info, current component: shard-mxw'
+ IFS=,
+ read -ra components
Redis Cluster already initialized
Redis Cluster already initialized, scaling out the shard...
init other components and pods info, current component: shard-mxw
+ IFS=,
+ read -ra deleting_components
+ IFS=,
+ read -ra undeleted_components
+ for comp in "${components[@]}"
+ '[' shard-bjt = shard-mxw ']'
+ other_components+=("$comp")
+ for comp in "${components[@]}"
+ '[' shard-6v7 = shard-mxw ']'
+ other_components+=("$comp")
+ for comp in "${components[@]}"
+ '[' shard-5n9 = shard-mxw ']'
+ other_components+=("$comp")
+ for comp in "${components[@]}"
+ '[' shard-mxw = shard-mxw ']'
+ echo 'skip the component shard-mxw as it is the current component'
+ continue
+ for comp in "${components[@]}"
+ '[' shard-l8g = shard-mxw ']'
+ other_components+=("$comp")
+ IFS=,
+ read -ra pod_ips
skip the component shard-mxw as it is the current component
+ IFS=,
+ read -ra pod_names
+ for index in "${!pod_ips[@]}"
+ echo redisc-pupldo-shard-bjt-1
+ grep shard-mxw-
++ extract_pod_name_prefix redisc-pupldo-shard-bjt-1
++ local pod_name=redisc-pupldo-shard-bjt-1
+++ echo redisc-pupldo-shard-bjt-1
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-bjt
++ echo redisc-pupldo-shard-bjt
+ pod_name_prefix=redisc-pupldo-shard-bjt
+ echo
+ grep -q redisc-pupldo-shard-bjt
+ other_undeleted_component_pod_ips+=("${pod_ips[$index]}")
+ other_undeleted_component_pod_names+=("${pod_names[$index]}")
++ extract_pod_name_prefix redisc-pupldo-shard-bjt-1
++ local pod_name=redisc-pupldo-shard-bjt-1
+++ echo redisc-pupldo-shard-bjt-1
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-bjt
++ echo redisc-pupldo-shard-bjt
+ pod_name_prefix=redisc-pupldo-shard-bjt
+ pod_fqdn=redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless
+ other_undeleted_component_nodes+=("$pod_fqdn:$SERVICE_PORT")
+ for index in "${!pod_ips[@]}"
+ echo redisc-pupldo-shard-bjt-0
+ grep shard-mxw-
++ extract_pod_name_prefix redisc-pupldo-shard-bjt-0
++ local pod_name=redisc-pupldo-shard-bjt-0
+++ echo redisc-pupldo-shard-bjt-0
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-bjt
++ echo redisc-pupldo-shard-bjt
+ pod_name_prefix=redisc-pupldo-shard-bjt
+ echo
+ grep -q redisc-pupldo-shard-bjt
+ other_undeleted_component_pod_ips+=("${pod_ips[$index]}")
+ other_undeleted_component_pod_names+=("${pod_names[$index]}")
++ extract_pod_name_prefix redisc-pupldo-shard-bjt-0
++ local pod_name=redisc-pupldo-shard-bjt-0
+++ echo redisc-pupldo-shard-bjt-0
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-bjt
++ echo redisc-pupldo-shard-bjt
+ pod_name_prefix=redisc-pupldo-shard-bjt
+ pod_fqdn=redisc-pupldo-shard-bjt-0.redisc-pupldo-shard-bjt-headless
+ other_undeleted_component_nodes+=("$pod_fqdn:$SERVICE_PORT")
+ for index in "${!pod_ips[@]}"
+ echo redisc-pupldo-shard-6v7-1
+ grep shard-mxw-
++ extract_pod_name_prefix redisc-pupldo-shard-6v7-1
++ local pod_name=redisc-pupldo-shard-6v7-1
+++ echo redisc-pupldo-shard-6v7-1
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-6v7
++ echo redisc-pupldo-shard-6v7
+ pod_name_prefix=redisc-pupldo-shard-6v7
+ echo
+ grep -q redisc-pupldo-shard-6v7
+ other_undeleted_component_pod_ips+=("${pod_ips[$index]}")
+ other_undeleted_component_pod_names+=("${pod_names[$index]}")
++ extract_pod_name_prefix redisc-pupldo-shard-6v7-1
++ local pod_name=redisc-pupldo-shard-6v7-1
+++ echo redisc-pupldo-shard-6v7-1
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-6v7
++ echo redisc-pupldo-shard-6v7
+ pod_name_prefix=redisc-pupldo-shard-6v7
+ pod_fqdn=redisc-pupldo-shard-6v7-1.redisc-pupldo-shard-6v7-headless
+ other_undeleted_component_nodes+=("$pod_fqdn:$SERVICE_PORT")
+ for index in "${!pod_ips[@]}"
+ echo redisc-pupldo-shard-6v7-0
+ grep shard-mxw-
++ extract_pod_name_prefix redisc-pupldo-shard-6v7-0
++ local pod_name=redisc-pupldo-shard-6v7-0
+++ echo redisc-pupldo-shard-6v7-0
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-6v7
++ echo redisc-pupldo-shard-6v7
+ pod_name_prefix=redisc-pupldo-shard-6v7
+ echo
+ grep -q redisc-pupldo-shard-6v7
+ other_undeleted_component_pod_ips+=("${pod_ips[$index]}")
+ other_undeleted_component_pod_names+=("${pod_names[$index]}")
++ extract_pod_name_prefix redisc-pupldo-shard-6v7-0
++ local pod_name=redisc-pupldo-shard-6v7-0
+++ echo redisc-pupldo-shard-6v7-0
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-6v7
++ echo redisc-pupldo-shard-6v7
+ pod_name_prefix=redisc-pupldo-shard-6v7
+ pod_fqdn=redisc-pupldo-shard-6v7-0.redisc-pupldo-shard-6v7-headless
+ other_undeleted_component_nodes+=("$pod_fqdn:$SERVICE_PORT")
+ for index in "${!pod_ips[@]}"
+ echo redisc-pupldo-shard-5n9-0
+ grep shard-mxw-
++ extract_pod_name_prefix redisc-pupldo-shard-5n9-0
++ local pod_name=redisc-pupldo-shard-5n9-0
+++ sed 's/-[0-9]\+$//'
+++ echo redisc-pupldo-shard-5n9-0
++ prefix=redisc-pupldo-shard-5n9
++ echo redisc-pupldo-shard-5n9
+ pod_name_prefix=redisc-pupldo-shard-5n9
+ echo
+ grep -q redisc-pupldo-shard-5n9
+ other_undeleted_component_pod_ips+=("${pod_ips[$index]}")
+ other_undeleted_component_pod_names+=("${pod_names[$index]}")
++ extract_pod_name_prefix redisc-pupldo-shard-5n9-0
++ local pod_name=redisc-pupldo-shard-5n9-0
+++ echo redisc-pupldo-shard-5n9-0
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-5n9
++ echo redisc-pupldo-shard-5n9
+ pod_name_prefix=redisc-pupldo-shard-5n9
+ pod_fqdn=redisc-pupldo-shard-5n9-0.redisc-pupldo-shard-5n9-headless
+ other_undeleted_component_nodes+=("$pod_fqdn:$SERVICE_PORT")
+ for index in "${!pod_ips[@]}"
+ echo redisc-pupldo-shard-5n9-1
+ grep shard-mxw-
++ extract_pod_name_prefix redisc-pupldo-shard-5n9-1
++ local pod_name=redisc-pupldo-shard-5n9-1
+++ echo redisc-pupldo-shard-5n9-1
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-5n9
++ echo redisc-pupldo-shard-5n9
+ pod_name_prefix=redisc-pupldo-shard-5n9
+ echo
+ grep -q redisc-pupldo-shard-5n9
+ other_undeleted_component_pod_ips+=("${pod_ips[$index]}")
+ other_undeleted_component_pod_names+=("${pod_names[$index]}")
++ extract_pod_name_prefix redisc-pupldo-shard-5n9-1
++ local pod_name=redisc-pupldo-shard-5n9-1
+++ echo redisc-pupldo-shard-5n9-1
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-5n9
++ echo redisc-pupldo-shard-5n9
+ pod_name_prefix=redisc-pupldo-shard-5n9
+ pod_fqdn=redisc-pupldo-shard-5n9-1.redisc-pupldo-shard-5n9-headless
+ other_undeleted_component_nodes+=("$pod_fqdn:$SERVICE_PORT")
+ for index in "${!pod_ips[@]}"
+ echo redisc-pupldo-shard-mxw-1
+ grep shard-mxw-
redisc-pupldo-shard-mxw-1
+ echo 'skip the pod redisc-pupldo-shard-mxw-1 as it belongs the component shard-mxw'
+ continue
+ for index in "${!pod_ips[@]}"
skip the pod redisc-pupldo-shard-mxw-1 as it belongs the component shard-mxw
+ echo redisc-pupldo-shard-mxw-0
+ grep shard-mxw-
redisc-pupldo-shard-mxw-0
+ echo 'skip the pod redisc-pupldo-shard-mxw-0 as it belongs the component shard-mxw'
+ continue
+ for index in "${!pod_ips[@]}"
skip the pod redisc-pupldo-shard-mxw-0 as it belongs the component shard-mxw
+ echo redisc-pupldo-shard-l8g-1
+ grep shard-mxw-
++ extract_pod_name_prefix redisc-pupldo-shard-l8g-1
++ local pod_name=redisc-pupldo-shard-l8g-1
+++ echo redisc-pupldo-shard-l8g-1
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-l8g
++ echo redisc-pupldo-shard-l8g
+ pod_name_prefix=redisc-pupldo-shard-l8g
+ echo
+ grep -q redisc-pupldo-shard-l8g
+ other_undeleted_component_pod_ips+=("${pod_ips[$index]}")
+ other_undeleted_component_pod_names+=("${pod_names[$index]}")
++ extract_pod_name_prefix redisc-pupldo-shard-l8g-1
++ local pod_name=redisc-pupldo-shard-l8g-1
+++ echo redisc-pupldo-shard-l8g-1
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-l8g
++ echo redisc-pupldo-shard-l8g
+ pod_name_prefix=redisc-pupldo-shard-l8g
+ pod_fqdn=redisc-pupldo-shard-l8g-1.redisc-pupldo-shard-l8g-headless
+ other_undeleted_component_nodes+=("$pod_fqdn:$SERVICE_PORT")
+ for index in "${!pod_ips[@]}"
+ echo redisc-pupldo-shard-l8g-0
+ grep shard-mxw-
++ extract_pod_name_prefix redisc-pupldo-shard-l8g-0
++ local pod_name=redisc-pupldo-shard-l8g-0
+++ echo redisc-pupldo-shard-l8g-0
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-l8g
++ echo redisc-pupldo-shard-l8g
+ pod_name_prefix=redisc-pupldo-shard-l8g
+ echo
+ grep -q redisc-pupldo-shard-l8g
+ other_undeleted_component_pod_ips+=("${pod_ips[$index]}")
+ other_undeleted_component_pod_names+=("${pod_names[$index]}")
++ extract_pod_name_prefix redisc-pupldo-shard-l8g-0
++ local pod_name=redisc-pupldo-shard-l8g-0
+++ echo redisc-pupldo-shard-l8g-0
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-l8g
++ echo redisc-pupldo-shard-l8g
+ pod_name_prefix=redisc-pupldo-shard-l8g
+ pod_fqdn=redisc-pupldo-shard-l8g-0.redisc-pupldo-shard-l8g-headless
+ other_undeleted_component_nodes+=("$pod_fqdn:$SERVICE_PORT")
+ echo 'other_components: shard-bjt shard-6v7 shard-5n9 shard-l8g'
+ echo 'other_deleting_components: '
+ echo 'other_undeleted_components: '
+ echo 'other_undeleted_component_pod_ips: 10.244.14.173 10.244.14.177 10.244.14.175 10.244.14.176 10.244.14.186 10.244.14.187 10.244.14.174 10.244.14.178'
+ echo 'other_undeleted_component_pod_names: redisc-pupldo-shard-bjt-1 redisc-pupldo-shard-bjt-0 redisc-pupldo-shard-6v7-1 redisc-pupldo-shard-6v7-0 redisc-pupldo-shard-5n9-0 redisc-pupldo-shard-5n9-1 redisc-pupldo-shard-l8g-1 redisc-pupldo-shard-l8g-0'
+ echo 'other_undeleted_component_nodes: redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379 redisc-pupldo-shard-bjt-0.redisc-pupldo-shard-bjt-headless:6379 redisc-pupldo-shard-6v7-1.redisc-pupldo-shard-6v7-headless:6379 redisc-pupldo-shard-6v7-0.redisc-pupldo-shard-6v7-headless:6379 redisc-pupldo-shard-5n9-0.redisc-pupldo-shard-5n9-headless:6379 redisc-pupldo-shard-5n9-1.redisc-pupldo-shard-5n9-headless:6379 redisc-pupldo-shard-l8g-1.redisc-pupldo-shard-l8g-headless:6379 redisc-pupldo-shard-l8g-0.redisc-pupldo-shard-l8g-headless:6379'
+ init_current_comp_default_nodes_for_scale_out
+ '[' -z redisc-pupldo-shard-mxw-0,redisc-pupldo-shard-mxw-1 ']'
+ current_comp_default_primary_node=()
+ current_comp_default_other_nodes=()
+ local port=6379
other_components: shard-bjt shard-6v7 shard-5n9 shard-l8g
other_deleting_components: 
other_undeleted_components: 
other_undeleted_component_pod_ips: 10.244.14.173 10.244.14.177 10.244.14.175 10.244.14.176 10.244.14.186 10.244.14.187 10.244.14.174 10.244.14.178
other_undeleted_component_pod_names: redisc-pupldo-shard-bjt-1 redisc-pupldo-shard-bjt-0 redisc-pupldo-shard-6v7-1 redisc-pupldo-shard-6v7-0 redisc-pupldo-shard-5n9-0 redisc-pupldo-shard-5n9-1 redisc-pupldo-shard-l8g-1 redisc-pupldo-shard-l8g-0
other_undeleted_component_nodes: redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379 redisc-pupldo-shard-bjt-0.redisc-pupldo-shard-bjt-headless:6379 redisc-pupldo-shard-6v7-1.redisc-pupldo-shard-6v7-headless:6379 redisc-pupldo-shard-6v7-0.redisc-pupldo-shard-6v7-headless:6379 redisc-pupldo-shard-5n9-0.redisc-pupldo-shard-5n9-headless:6379 redisc-pupldo-shard-5n9-1.redisc-pupldo-shard-5n9-headless:6379 redisc-pupldo-shard-l8g-1.redisc-pupldo-shard-l8g-headless:6379 redisc-pupldo-shard-l8g-0.redisc-pupldo-shard-l8g-headless:6379
++ echo redisc-pupldo-shard-mxw-0,redisc-pupldo-shard-mxw-1
++ tr , ' '
+ for pod_name in $(echo "$KB_CLUSTER_COMPONENT_POD_NAME_LIST" | tr ',' ' ')
++ extract_ordinal_from_pod_name redisc-pupldo-shard-mxw-0
++ local pod_name=redisc-pupldo-shard-mxw-0
++ local ordinal=0
++ echo 0
+ pod_name_ordinal=0
++ extract_pod_name_prefix redisc-pupldo-shard-mxw-0
++ local pod_name=redisc-pupldo-shard-mxw-0
+++ echo redisc-pupldo-shard-mxw-0
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-mxw
++ echo redisc-pupldo-shard-mxw
+ pod_name_prefix=redisc-pupldo-shard-mxw
+ local pod_fqdn=redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless
+ '[' 0 -eq 0 ']'
+ current_comp_default_primary_node+=(" $pod_fqdn:$port")
+ for pod_name in $(echo "$KB_CLUSTER_COMPONENT_POD_NAME_LIST" | tr ',' ' ')
++ extract_ordinal_from_pod_name redisc-pupldo-shard-mxw-1
++ local pod_name=redisc-pupldo-shard-mxw-1
++ local ordinal=1
++ echo 1
+ pod_name_ordinal=1
++ extract_pod_name_prefix redisc-pupldo-shard-mxw-1
++ local pod_name=redisc-pupldo-shard-mxw-1
+++ echo redisc-pupldo-shard-mxw-1
+++ sed 's/-[0-9]\+$//'
++ prefix=redisc-pupldo-shard-mxw
++ echo redisc-pupldo-shard-mxw
current_comp_default_primary_node:  redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379
current_comp_default_other_nodes:  redisc-pupldo-shard-mxw-1.redisc-pupldo-shard-mxw-headless:6379
+ pod_name_prefix=redisc-pupldo-shard-mxw
+ local pod_fqdn=redisc-pupldo-shard-mxw-1.redisc-pupldo-shard-mxw-headless
+ '[' 1 -eq 0 ']'
+ current_comp_default_other_nodes+=(" $pod_fqdn:$port")
+ echo 'current_comp_default_primary_node:  redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379'
+ echo 'current_comp_default_other_nodes:  redisc-pupldo-shard-mxw-1.redisc-pupldo-shard-mxw-headless:6379'
++ echo ' redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379'
++ awk '{print $1}'
+ primary_node_with_port=redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379
++ echo redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379
++ awk -F : '{print $1}'
+ primary_node_fqdn=redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless
++ get_cluster_id redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless
++ local cluster_node=redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless
++ '[' -z O3605v7HsS ']'
+++ redis-cli -h redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless -p 6379 -a O3605v7HsS cluster nodes
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Could not connect to Redis at redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379: No address associated with hostname
++ cluster_nodes_info=
+++ echo ''
+++ grep myself
+++ awk '{print $1}'
++ cluster_id=
++ echo ''
+ mapping_primary_cluster_id=
+ redis_cluster_check redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379
+ local cluster_node=redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379
+ '[' -z O3605v7HsS ']'
++ redis-cli --cluster check redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379 -p 6379 -a O3605v7HsS
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Could not connect to Redis at redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379: No address associated with hostname
+ check=
+ [[ '' =~ All 16384 slots covered ]]
+ false
++ find_exist_available_node
++ for node in "${other_undeleted_component_nodes[@]}"
++ redis_cluster_check redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379
++ local cluster_node=redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379
++ '[' -z O3605v7HsS ']'
+++ redis-cli --cluster check redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379 -p 6379 -a O3605v7HsS
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
redis-cli --cluster add-node redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379 redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379 -a O3605v7HsS
++ check='10.244.14.178:6379 (da347968...) -> 0 keys | 5461 slots | 1 slaves.
10.244.14.177:6379 (4721ba78...) -> 0 keys | 5462 slots | 1 slaves.
10.244.14.176:6379 (23b75f16...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379)
S: 949e9ca289fe37ed41d17bc604189bc004dcd688 redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379
   slots: (0 slots) slave
   replicates 4721ba788b3fc96cd182e58b0748b689e287853e
S: bf3191d7293d377a4aad2735d7d46a76efd7b070 10.244.14.174:6379
   slots: (0 slots) slave
   replicates da347968c476e151973717d3c639651b5b32723d
S: f6e7a371b65a35558e223f511fae2519fa11dea4 10.244.14.175:6379
   slots: (0 slots) slave
   replicates 23b75f168b43e563e8b1a16e75ff73db55ab5f34
M: da347968c476e151973717d3c639651b5b32723d 10.244.14.178:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 4721ba788b3fc96cd182e58b0748b689e287853e 10.244.14.177:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 23b75f168b43e563e8b1a16e75ff73db55ab5f34 10.244.14.176:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.'
++ [[ 10.244.14.178:6379 (da347968...) -> 0 keys | 5461 slots | 1 slaves.
10.244.14.177:6379 (4721ba78...) -> 0 keys | 5462 slots | 1 slaves.
10.244.14.176:6379 (23b75f16...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379)
S: 949e9ca289fe37ed41d17bc604189bc004dcd688 redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379
   slots: (0 slots) slave
   replicates 4721ba788b3fc96cd182e58b0748b689e287853e
S: bf3191d7293d377a4aad2735d7d46a76efd7b070 10.244.14.174:6379
   slots: (0 slots) slave
   replicates da347968c476e151973717d3c639651b5b32723d
S: f6e7a371b65a35558e223f511fae2519fa11dea4 10.244.14.175:6379
   slots: (0 slots) slave
   replicates 23b75f168b43e563e8b1a16e75ff73db55ab5f34
M: da347968c476e151973717d3c639651b5b32723d 10.244.14.178:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 4721ba788b3fc96cd182e58b0748b689e287853e 10.244.14.177:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 23b75f168b43e563e8b1a16e75ff73db55ab5f34 10.244.14.176:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered. =~ All 16384 slots covered ]]
++ true
++ echo redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379
++ return
+ available_node=redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379
+ '[' -z redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379 ']'
+ for current_comp_default_primary_node in $current_comp_default_primary_node
+ '[' -z O3605v7HsS ']'
+ echo 'redis-cli --cluster add-node redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379 redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379 -a O3605v7HsS'
+ redis-cli --cluster add-node redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379 redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379 -a O3605v7HsS
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Could not connect to Redis at redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379: No address associated with hostname
>>> Adding node redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379 to cluster redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379
>>> Performing Cluster Check (using node redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379)
S: 949e9ca289fe37ed41d17bc604189bc004dcd688 redisc-pupldo-shard-bjt-1.redisc-pupldo-shard-bjt-headless:6379
   slots: (0 slots) slave
   replicates 4721ba788b3fc96cd182e58b0748b689e287853e
S: bf3191d7293d377a4aad2735d7d46a76efd7b070 10.244.14.174:6379
   slots: (0 slots) slave
   replicates da347968c476e151973717d3c639651b5b32723d
S: f6e7a371b65a35558e223f511fae2519fa11dea4 10.244.14.175:6379
   slots: (0 slots) slave
   replicates 23b75f168b43e563e8b1a16e75ff73db55ab5f34
M: da347968c476e151973717d3c639651b5b32723d 10.244.14.178:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 4721ba788b3fc96cd182e58b0748b689e287853e 10.244.14.177:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 23b75f168b43e563e8b1a16e75ff73db55ab5f34 10.244.14.176:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[ERR] Sorry, can't connect to node redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379
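
Every connection attempt to the new shard above fails with "No address associated with hostname", i.e. redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless:6379 was not resolvable when the post-provision job ran. A minimal way to check that resolution from inside the cluster (illustrative only; the fully qualified name assumes the standard pod.headless-service.namespace.svc.cluster.local form) would be:

    # Run a throwaway pod in the same namespace and try to resolve the new shard pod's headless FQDN
    kubectl run dns-check -n default --rm -it --restart=Never --image=busybox:1.36 -- \
      nslookup redisc-pupldo-shard-mxw-0.redisc-pupldo-shard-mxw-headless.default.svc.cluster.local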

Expected behavior
The two new shards are added to the existing Redis Cluster and the post-provision jobs complete without errors.

github-actions[bot] commented 2 months ago

This issue has been marked as stale because it has been open for 30 days with no activity.

Y-Rookie commented 3 weeks ago

Currently, the 0.9 API does not support adding multiple shards at once, but this feature will be supported in future versions.
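
Until then, a possible workaround is to add shards one at a time and let the cluster settle between steps. A rough sketch, assuming the single shard template from the reproduction spec and that the Cluster reports status.phase Running once reconciliation finishes:

    # Scale 3 -> 4, wait for the cluster to settle, then 4 -> 5
    kubectl patch cluster redisc-pupldo -n default --type='json' \
      -p='[{"op":"replace","path":"/spec/shardingSpecs/0/shards","value":4}]'
    # Note: the phase may not flip immediately after the patch; also verify the new shard pods are Running
    kubectl wait cluster/redisc-pupldo -n default --for=jsonpath='{.status.phase}'=Running --timeout=10m
    kubectl patch cluster redisc-pupldo -n default --type='json' \
      -p='[{"op":"replace","path":"/spec/shardingSpecs/0/shards","value":5}]'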