kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, plus related cloud-native add-ons. Supports all-in-one, multi-node, and HA installations 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

panic: runtime error: index out of range [0] with length 0 #1180

Open · zjialin opened this issue 2 years ago

zjialin commented 2 years ago

Which version of KubeKey has the issue?

v2.0.0

What is your OS environment?

centos7.9

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.0.44, internalAddress: 192.168.0.44, user: root, password: "iccfis+123"}
  - {name: node1, address: 192.168.0.42, internalAddress: 192.168.0.42, user: root, password: "iccfis+123"}
  - {name: node2, address: 192.168.0.43, internalAddress: 192.168.0.43, user: root, password: "iccfis+123"}
  roleGroups:
    etcd:
    - node1
    master:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.2.1
spec:
  persistence:
    storageClass: ""       
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""        
  etcd:
    monitoring: false      
    endpointIps: localhost  
    port: 2379             
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi 
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi  
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:  
      elasticsearchMasterVolumeSize: 4Gi   
      elasticsearchDataVolumeSize: 20Gi   
      logMaxAge: 7          
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""  
  console:
    enableMultiLogin: true 
    port: 30880
  alerting:       
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:    
    enabled: false
  devops:           
    enabled: false
    jenkinsMemoryLim: 2Gi     
    jenkinsMemoryReq: 1500Mi 
    jenkinsVolumeSize: 8Gi   
    jenkinsJavaOpts_Xms: 512m  
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:          
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:         
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:             
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi  
    prometheusVolumeSize: 20Gi  
  multicluster:
    clusterRole: none 
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:    
    enabled: false  
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress: 
          - ""           
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

A clear and concise description of what happened.

KubeKey panics while generating the OpenEBS manifest in the DeployStorageClassModule step:

12:22:34 CST [DeployStorageClassModule] Generate OpenEBS manifest
panic: runtime error: index out of range [0] with length 0

The full stack trace is included in the relevant log output below.

Relevant log output

12:22:34 CST [DeployStorageClassModule] Generate OpenEBS manifest
panic: runtime error: index out of range [0] with length 0

goroutine 1068 [running]:
github.com/kubesphere/kubekey/pkg/plugins/storage.(*CheckDefaultStorageClass).PreCheck(0xc0001418e0, {0x257c0c0, 0xc0005ee780})
        /home/runner/work/kubekey/kubekey/pkg/plugins/storage/prepares.go:39 +0x1a5
github.com/kubesphere/kubekey/pkg/core/prepare.(*PrepareCollection).PreCheck(0xc000857860, {0x257c0c0, 0xc0005ee780})
        /home/runner/work/kubekey/kubekey/pkg/core/prepare/base.go:51 +0x88
github.com/kubesphere/kubekey/pkg/core/task.(*RemoteTask).When(0x21b747d, {0x257c0c0, 0xc0005ee780})
        /home/runner/work/kubekey/kubekey/pkg/core/task/remote_task.go:179 +0x31
github.com/kubesphere/kubekey/pkg/core/task.(*RemoteTask).WhenWithRetry(0xc0007389c0, {0x257c0c0, 0xc0005ee780})
        /home/runner/work/kubekey/kubekey/pkg/core/task/remote_task.go:191 +0xd9
github.com/kubesphere/kubekey/pkg/core/task.(*RemoteTask).Run(0xc0007389c0, {0x257c0c0, 0xc0005ee780}, {0x2584c78, 0xc000413990}, 0xc0000cd3f0, 0xc0000cd600)
        /home/runner/work/kubekey/kubekey/pkg/core/task/remote_task.go:138 +0xf9
created by github.com/kubesphere/kubekey/pkg/core/task.(*RemoteTask).RunWithTimeout
        /home/runner/work/kubekey/kubekey/pkg/core/task/remote_task.go:109 +0x16f
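
The trace points at (*CheckDefaultStorageClass).PreCheck in pkg/plugins/storage/prepares.go:39, which runs while KubeKey decides whether it needs to deploy OpenEBS as the default StorageClass. A common way to hit "index out of range [0] with length 0" in that kind of pre-check is indexing the first token of a remote command's output without guarding against the command returning nothing. The Go sketch below only illustrates that failure mode under that assumption; it is not the actual KubeKey source, and the names in it are made up for the example.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Simulate the remote command producing no output, e.g. because the
	// cluster has no StorageClass marked "(default)" yet.
	output := ""

	fields := strings.Fields(output) // empty slice: len(fields) == 0

	// Unguarded access reproduces the reported panic:
	//   panic: runtime error: index out of range [0] with length 0
	// fmt.Println(fields[0])

	// A guarded pre-check would treat empty output as "no default
	// StorageClass found" instead of panicking.
	if len(fields) == 0 {
		fmt.Println("no default StorageClass found")
		return
	}
	fmt.Println("first token:", fields[0])
}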

Additional information

No response

24sama commented 2 years ago

Hi @zjialin, thanks for reporting this bug. When the panic happens, could you run the following command and share its output? That will help us fix the bug.

kubectl get sc --no-headers | grep '(default)' | wc -l
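
For context, that pipeline prints a single number: how many lines of kubectl get sc --no-headers contain the "(default)" marker, i.e. how many default StorageClasses the cluster currently has. On a fresh cluster where KubeKey is about to install OpenEBS, the expected result is 0, which is exactly the empty-result case the pre-check appears to stumble over.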