kubesphere / kubekey

Installs Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, plus related cloud-native add-ons; supports all-in-one, multi-node, and HA deployments 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

Installing KubeSphere 3.4.0 with kk times out #1960

Closed: yangliuyu closed this issue 1 year ago

yangliuyu commented 1 year ago

Which version of KubeKey has the issue?

kk version: &version.Info{Major:"3", Minor:"0", GitVersion:"v3.0.10", GitCommit:"3e381c6d5556117d132326b58c5177e0b0e839b6", GitTreeState:"clean", BuildDate:"2023-07-28T06:08:59Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

What is your OS environment?

Ubuntu 20.04.6 LTS

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: kubesphere-k8s
spec:
  hosts:
  - {name: kubesphere1, address: 172.6.3.2, internalAddress: 172.6.3.2, user: root, password: ""}
  - {name: kubesphere2, address: 172.6.3.3, internalAddress: 172.6.3.3, user: root, password: ""}
  - {name: kubesphere3, address: 172.6.3.4, internalAddress: 172.6.3.4, user: root, password: ""}
  roleGroups:
    etcd:
    - kubesphere2
    control-plane: 
    - kubesphere2
    worker:
    - kubesphere1
    - kubesphere2
    - kubesphere3
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere-k8s.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.26.5
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: [""]
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: true
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: true
    # resources: {}
    jenkinsMemoryLim: 4Gi
    jenkinsMemoryReq: 3000Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 2400m
    jenkinsJavaOpts_Xmx: 3200m
    jenkinsJavaOpts_MaxRAM: 4g
  events:
    enabled: true
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: true
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: true
    ippool:
      type: none
    topology:
      type: weave-scope
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: true
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600

A clear and concise description of what happened:

Installing KubeSphere 3.4.0 with kk hangs until it times out. After deleting the 3.4.0 cluster, reinstalling a 3.3.2 cluster works fine. 3.3.2 pairs with Kubernetes 1.22.17 and Docker 20.10.8; 3.4.0 pairs with Kubernetes 1.26.4 and containerd v1.6.4.


Relevant log output

16:41:41 CST [DeployKubeSphereModule] Apply ks-installer
16:41:41 CST stdout: [kubesphere2]
namespace/kubesphere-system created
serviceaccount/ks-installer created
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
16:41:41 CST success: [kubesphere2]
16:41:41 CST [DeployKubeSphereModule] Add config to ks-installer manifests
16:41:41 CST success: [kubesphere2]
16:41:41 CST [DeployKubeSphereModule] Create the kubesphere namespace
16:41:41 CST success: [kubesphere2]
16:41:41 CST [DeployKubeSphereModule] Setup ks-installer config
16:41:41 CST stdout: [kubesphere2]
secret/kube-etcd-client-certs created
16:41:41 CST success: [kubesphere2]
16:41:41 CST [DeployKubeSphereModule] Apply ks-installer
16:41:43 CST stdout: [kubesphere2]
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
16:41:43 CST success: [kubesphere2]
Please wait for the installation to complete:   >>--->
18:41:43 CST failed: [kubesphere2]
error: Pipeline[CreateClusterPipeline] execute failed: Module[CheckResultModule] exec failed:
failed: [kubesphere2] execute task timeout, Timeout=2h
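The "Timeout=2h" above is the fixed budget kk allows for the result check after applying ks-installer: it keeps polling until the installation reports ready or two hours elapse. A hypothetical stdlib-only sketch of that polling shape (the function name, parameters, and error text are illustrative, not KubeKey's actual API):

```python
import time

def wait_for_result(check, timeout_s=2 * 60 * 60, interval_s=10):
    """Poll `check` until it returns True or `timeout_s` elapses.

    Illustrative stand-in for the shape of KubeKey's post-install
    result check; raises on expiry like the log line above.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    raise TimeoutError(f"execute task timeout, Timeout={timeout_s}s")
```

Because the installer pod here never finishes (see the failed Ansible task below), the check can never succeed and the two hours are always exhausted.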

Additional information

root@kubesphere2:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f
2023-08-21T16:41:44+08:00 INFO : shell-operator latest
2023-08-21T16:41:44+08:00 INFO : HTTP SERVER Listening on 0.0.0.0:9115
2023-08-21T16:41:44+08:00 INFO : Use temporary dir: /tmp/shell-operator
2023-08-21T16:41:44+08:00 INFO : Initialize hooks manager ...
2023-08-21T16:41:44+08:00 INFO : Search and load hooks ...
2023-08-21T16:41:44+08:00 INFO : Load hook config from '/hooks/kubesphere/installRunner.py'
2023-08-21T16:41:45+08:00 INFO : Load hook config from '/hooks/kubesphere/schedule.sh'
2023-08-21T16:41:45+08:00 INFO : Initializing schedule manager ...
2023-08-21T16:41:45+08:00 INFO : KUBE Init Kubernetes client
2023-08-21T16:41:45+08:00 INFO : KUBE-INIT Kubernetes client is configured successfully
2023-08-21T16:41:45+08:00 INFO : MAIN: run main loop
2023-08-21T16:41:45+08:00 INFO : MAIN: add onStartup tasks
2023-08-21T16:41:45+08:00 INFO : MSTOR Create new metric shell_operator_live_ticks
2023-08-21T16:41:45+08:00 INFO : MSTOR Create new metric shell_operator_tasks_queue_length
2023-08-21T16:41:45+08:00 INFO : Running schedule manager ...
2023-08-21T16:41:45+08:00 INFO : QUEUE add all HookRun@OnStartup
2023-08-21T16:41:45+08:00 INFO : GVR for kind 'ClusterConfiguration' is installer.kubesphere.io/v1alpha1, Resource=clusterconfigurations
2023-08-21T16:41:45+08:00 INFO : EVENT Kube event '14291da2-b83a-4dd7-b807-7a9d9faae583'
2023-08-21T16:41:45+08:00 INFO : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2023-08-21T16:41:48+08:00 INFO : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2023-08-21T16:41:48+08:00 INFO : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
Delete old cluster configuration successfully
Create cluster configuration successfully
2023-08-21T16:41:48+08:00 INFO : EVENT Kube event '14291da2-b83a-4dd7-b807-7a9d9faae583'
2023-08-21T16:41:48+08:00 INFO : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2023-08-21T16:41:51+08:00 INFO : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2023-08-21T16:41:51+08:00 INFO : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ***

TASK [download : Generating images list] *** skipping: [localhost]

TASK [download : Synchronizing images] *****

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] *** skipping: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] ***** ok: [localhost] => { "msg": "Check roles/kubesphere-defaults/defaults/main.yml" }

TASK [preinstall : KubeSphere | Stopping if Kubernetes version is nonsupport] *** ok: [localhost] => { "changed": false, "msg": "All assertions passed" }

TASK [preinstall : KubeSphere | Checking StorageClass] ***** changed: [localhost]

TASK [preinstall : KubeSphere | Stopping if StorageClass was not found] **** skipping: [localhost]

TASK [preinstall : KubeSphere | Checking default StorageClass] ***** changed: [localhost]

TASK [preinstall : KubeSphere | Stopping if default StorageClass was not found] *** ok: [localhost] => { "changed": false, "msg": "All assertions passed" }

TASK [preinstall : KubeSphere | Stop if bad admin password] **** skipping: [localhost]

TASK [preinstall : KubeSphere | Checking KubeSphere component] ***** changed: [localhost]

TASK [preinstall : KubeSphere | Getting KubeSphere component version] ** skipping: [localhost]

TASK [preinstall : KubeSphere | Getting KubeSphere component version] ** skipping: [localhost] => (item=ks-openldap) skipping: [localhost] => (item=ks-redis) skipping: [localhost] => (item=ks-minio) skipping: [localhost] => (item=ks-openpitrix) skipping: [localhost] => (item=elasticsearch-logging) skipping: [localhost] => (item=elasticsearch-logging-curator) skipping: [localhost] => (item=istio) skipping: [localhost] => (item=istio-init) skipping: [localhost] => (item=jaeger-operator) skipping: [localhost] => (item=ks-jenkins) skipping: [localhost] => (item=ks-sonarqube) skipping: [localhost] => (item=logging-fluentbit-operator) skipping: [localhost] => (item=uc) skipping: [localhost] => (item=metrics-server)

PLAY RECAP ***** localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ***

TASK [download : Generating images list] *** skipping: [localhost]

TASK [download : Synchronizing images] *****

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] *** skipping: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] ***** ok: [localhost] => { "msg": "Check roles/kubesphere-defaults/defaults/main.yml" }

TASK [Metrics-Server | Getting metrics-server installation files] ** changed: [localhost]

TASK [metrics-server : Metrics-Server | Creating manifests] **** changed: [localhost] => (item={'file': 'metrics-server.yaml'})

TASK [metrics-server : Metrics-Server | Checking Metrics-Server] *** changed: [localhost]

TASK [Metrics-Server | Uninstalling old metrics-server] **** skipping: [localhost]

TASK [Metrics-Server | Installing new metrics-server] ** changed: [localhost]

TASK [metrics-server : Metrics-Server | Waitting for metrics.k8s.io ready] ***** FAILED - RETRYING: Metrics-Server | Waitting for metrics.k8s.io ready (60 retries left). changed: [localhost]

TASK [Metrics-Server | Importing metrics-server status] **** changed: [localhost]

PLAY RECAP ***** localhost : ok=7 changed=6 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ***

TASK [download : Generating images list] *** skipping: [localhost]

TASK [download : Synchronizing images] *****

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] *** skipping: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] ***** ok: [localhost] => { "msg": "Check roles/kubesphere-defaults/defaults/main.yml" }

TASK [common : KubeSphere | Checking kube-node-lease namespace] **** changed: [localhost]

TASK [common : KubeSphere | Getting system namespaces] ***** ok: [localhost]

TASK [common : set_fact] *** ok: [localhost]

TASK [common : debug] ** ok: [localhost] => { "msg": [ "kubesphere-system", "kubesphere-controls-system", "kubesphere-monitoring-system", "kubesphere-monitoring-federated", "kube-node-lease", "kubesphere-logging-system", "kubesphere-devops-system", "istio-system" ] }

TASK [common : KubeSphere | Creating KubeSphere namespace] ***** changed: [localhost] => (item=kubesphere-system) changed: [localhost] => (item=kubesphere-controls-system) changed: [localhost] => (item=kubesphere-monitoring-system) changed: [localhost] => (item=kubesphere-monitoring-federated) changed: [localhost] => (item=kube-node-lease) changed: [localhost] => (item=kubesphere-logging-system) changed: [localhost] => (item=kubesphere-devops-system) changed: [localhost] => (item=istio-system)

TASK [common : KubeSphere | Labeling system-workspace] ***** changed: [localhost] => (item=default) changed: [localhost] => (item=kube-public) changed: [localhost] => (item=kube-system) changed: [localhost] => (item=kubesphere-system) changed: [localhost] => (item=kubesphere-controls-system) changed: [localhost] => (item=kubesphere-monitoring-system) changed: [localhost] => (item=kubesphere-monitoring-federated) changed: [localhost] => (item=kube-node-lease) changed: [localhost] => (item=kubesphere-logging-system) changed: [localhost] => (item=kubesphere-devops-system) changed: [localhost] => (item=istio-system)

TASK [common : KubeSphere | Labeling namespace for network policy] ***** changed: [localhost]

TASK [common : KubeSphere | Getting Kubernetes master num] ***** changed: [localhost]

TASK [common : KubeSphere | Setting master num] **** ok: [localhost]

TASK [KubeSphere | Getting common component installation files] **** changed: [localhost] => (item=common)

TASK [common : KubeSphere | Checking Kubernetes version] *** changed: [localhost]

TASK [KubeSphere | Getting common component installation files] **** changed: [localhost] => (item=snapshot-controller)

TASK [common : KubeSphere | Creating snapshot controller values] *** changed: [localhost] => (item={'name': 'custom-values-snapshot-controller', 'file': 'custom-values-snapshot-controller.yaml'})

TASK [common : KubeSphere | Updating snapshot crd] ***** changed: [localhost]

TASK [common : KubeSphere | Deploying snapshot controller] ***** changed: [localhost]

TASK [KubeSphere | Checking openpitrix common component] *** changed: [localhost]

TASK [common : include_tasks] ** skipping: [localhost] => (item={'op': 'openpitrix-db', 'ks': 'mysql-pvc'}) skipping: [localhost] => (item={'op': 'openpitrix-etcd', 'ks': 'etcd-pvc'})

TASK [common : Getting PersistentVolumeName (mysql)] *** skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (mysql)] *** skipping: [localhost]

TASK [common : Setting PersistentVolumeName (mysql)] *** skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (mysql)] *** skipping: [localhost]

TASK [common : Getting PersistentVolumeName (etcd)] **** skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (etcd)] **** skipping: [localhost]

TASK [common : Setting PersistentVolumeName (etcd)] **** skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (etcd)] **** skipping: [localhost]

TASK [common : KubeSphere | Checking mysql PersistentVolumeClaim] ** changed: [localhost]

TASK [common : KubeSphere | Setting mysql db pv size] ** skipping: [localhost]

TASK [common : KubeSphere | Checking redis PersistentVolumeClaim] ** changed: [localhost]

TASK [common : KubeSphere | Setting redis db pv size] ** skipping: [localhost]

TASK [common : KubeSphere | Checking minio PersistentVolumeClaim] ** changed: [localhost]

TASK [common : KubeSphere | Setting minio pv size] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking openldap PersistentVolumeClaim] *** changed: [localhost]

TASK [common : KubeSphere | Setting openldap pv size] ** skipping: [localhost]

TASK [common : KubeSphere | Checking etcd db PersistentVolumeClaim] **** changed: [localhost]

TASK [common : KubeSphere | Setting etcd pv size] ** skipping: [localhost]

TASK [common : KubeSphere | Checking redis ha PersistentVolumeClaim] *** changed: [localhost]

TASK [common : KubeSphere | Setting redis ha pv size] ** skipping: [localhost]

TASK [common : KubeSphere | Checking es-master PersistentVolumeClaim] ** changed: [localhost]

TASK [common : KubeSphere | Setting es master pv size] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking es data PersistentVolumeClaim] **** changed: [localhost]

TASK [common : KubeSphere | Setting es data pv size] *** skipping: [localhost]

TASK [common : KubeSphere | Checking opensearch-master PersistentVolumeClaim] *** changed: [localhost]

TASK [common : KubeSphere | Setting opensearch master pv size] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking opensearch data PersistentVolumeClaim] **** changed: [localhost]

TASK [common : KubeSphere | Setting opensearch data pv size] *** skipping: [localhost]

TASK [KubeSphere | Creating common component manifests] **** changed: [localhost] => (item={'path': 'redis', 'file': 'redis.yaml'})

TASK [common : KubeSphere | Deploying etcd and mysql] ** skipping: [localhost] => (item=etcd.yaml) skipping: [localhost] => (item=mysql.yaml)

TASK [common : KubeSphere | Getting minio installation files] ** skipping: [localhost] => (item=minio-ha)

TASK [common : KubeSphere | Creating manifests] **** skipping: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'})

TASK [common : KubeSphere | Checking minio] **** skipping: [localhost]

TASK [common : KubeSphere | Deploying minio] *** skipping: [localhost]

TASK [common : debug] ** skipping: [localhost]

TASK [common : fail] *** skipping: [localhost]

TASK [common : KubeSphere | Importing minio status] **** skipping: [localhost]

TASK [common : KubeSphere | Generet Random password] *** skipping: [localhost]

TASK [common : KubeSphere | Creating Redis Password Secret] **** skipping: [localhost]

TASK [common : KubeSphere | Getting redis installation files] ** skipping: [localhost] => (item=redis-ha)

TASK [common : KubeSphere | Creating manifests] **** skipping: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'})

TASK [common : KubeSphere | Checking old redis status] ***** skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old redis svc] ***** skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] *** skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] *** skipping: [localhost] => (item=redis.yaml)

TASK [common : KubeSphere | Importing redis status] **** skipping: [localhost]

TASK [common : KubeSphere | Getting openldap installation files] *** skipping: [localhost] => (item=openldap-ha)

TASK [common : KubeSphere | Creating manifests] **** skipping: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'})

TASK [common : KubeSphere | Checking old openldap status] ** skipping: [localhost]

TASK [common : KubeSphere | Shutdown ks-account] *** skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old openldap svc] ** skipping: [localhost]

TASK [common : KubeSphere | Checking openldap] ***** skipping: [localhost]

TASK [common : KubeSphere | Deploying openldap] **** skipping: [localhost]

TASK [common : KubeSphere | Loading old openldap data] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking openldap-ha status] *** skipping: [localhost]

TASK [common : KubeSphere | Getting openldap-ha pod list] ** skipping: [localhost]

TASK [common : KubeSphere | Getting old openldap data] ***** skipping: [localhost]

TASK [common : KubeSphere | Migrating openldap data] *** skipping: [localhost]

TASK [common : KubeSphere | Disabling old openldap] **** skipping: [localhost]

TASK [common : KubeSphere | Restarting openldap] *** skipping: [localhost]

TASK [common : KubeSphere | Restarting ks-account] ***** skipping: [localhost]

TASK [common : KubeSphere | Importing openldap status] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking KubeSphere Config is Exists] ** changed: [localhost]

TASK [common : KubeSphere | Generet Random password] *** skipping: [localhost]

TASK [common : KubeSphere | Creating Redis Password Secret] **** skipping: [localhost]

TASK [common : KubeSphere | Getting redis installation files] ** skipping: [localhost] => (item=redis-ha)

TASK [common : KubeSphere | Creating manifests] **** skipping: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'})

TASK [common : KubeSphere | Checking old redis status] ***** skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old redis svc] ***** skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] *** skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] *** skipping: [localhost] => (item=redis.yaml)

TASK [common : KubeSphere | Importing redis status] **** skipping: [localhost]

TASK [common : KubeSphere | Getting openldap installation files] *** changed: [localhost] => (item=openldap-ha)

TASK [common : KubeSphere | Creating manifests] **** changed: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'})

TASK [common : KubeSphere | Checking old openldap status] ** changed: [localhost]

TASK [common : KubeSphere | Shutdown ks-account] *** skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old openldap svc] ** skipping: [localhost]

TASK [common : KubeSphere | Checking openldap] ***** changed: [localhost]

TASK [common : KubeSphere | Deploying openldap] **** changed: [localhost]

TASK [common : KubeSphere | Loading old openldap data] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking openldap-ha status] *** skipping: [localhost]

TASK [common : KubeSphere | Getting openldap-ha pod list] ** skipping: [localhost]

TASK [common : KubeSphere | Getting old openldap data] ***** skipping: [localhost]

TASK [common : KubeSphere | Migrating openldap data] *** skipping: [localhost]

TASK [common : KubeSphere | Disabling old openldap] **** skipping: [localhost]

TASK [common : KubeSphere | Restarting openldap] *** skipping: [localhost]

TASK [common : KubeSphere | Restarting ks-account] ***** skipping: [localhost]

TASK [common : KubeSphere | Importing openldap status] ***** changed: [localhost]

TASK [common : KubeSphere | Getting minio installation files] ** changed: [localhost] => (item=minio-ha)

TASK [common : KubeSphere | Creating manifests] **** changed: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'})

TASK [common : KubeSphere | Checking minio] **** changed: [localhost]

TASK [common : KubeSphere | Deploying minio] *** changed: [localhost]

TASK [common : debug] ** skipping: [localhost]

TASK [common : fail] *** skipping: [localhost]

TASK [common : KubeSphere | Importing minio status] **** changed: [localhost]

TASK [common : KubeSphere | Getting curator installation files] **** skipping: [localhost]

TASK [common : KubeSphere | Creating custom manifests] ***** skipping: [localhost] => (item={'name': 'custom-values-elasticsearch-curator', 'file': 'custom-values-elasticsearch-curator.yaml'})

TASK [common : KubeSphere | Creating elasticsearch credentials secret] ***** skipping: [localhost]

TASK [common : KubeSphere | Getting Elasticsearch host] **** skipping: [localhost]

TASK [common : KubeSphere | Importing es status] *** skipping: [localhost]

TASK [common : KubeSphere | Deploying elasticsearch-logging-curator] *** skipping: [localhost]

TASK [common : KubeSphere | Getting opensearch and curator installation files] *** changed: [localhost]

TASK [common : KubeSphere | Creating custom manifests] *****
changed: [localhost] => (item={'name': 'custom-values-opensearch-master', 'file': 'custom-values-opensearch-master.yaml'})
changed: [localhost] => (item={'name': 'custom-values-opensearch-data', 'file': 'custom-values-opensearch-data.yaml'})
failed: [localhost] (item={'name': 'custom-values-opensearch-curator', 'file': 'custom-values-opensearch-curator.yaml'}) => {"ansible_loop_var": "item", "changed": false, "item": {"file": "custom-values-opensearch-curator.yaml", "name": "custom-values-opensearch-curator"}, "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'opensearchPrefix'"}
changed: [localhost] => (item={'name': 'custom-values-opensearch-dashboard', 'file': 'custom-values-opensearch-dashboard.yaml'})

PLAY RECAP ***** localhost : ok=41 changed=36 unreachable=0 failed=1 skipped=82 rescued=0 ignored=0
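The recap above shows the actual root cause: rendering custom-values-opensearch-curator.yaml aborts because the template references opensearchPrefix, a key the rendered variables never define (the es section of the ClusterConfiguration above only carries logMaxAge, elkPrefix, and basicAuth settings). A hypothetical stdlib-only stand-in for that templating lookup, just to show the failure mode; lookup and ks_vars are illustrative names, not Ansible's API:

```python
def lookup(ctx, dotted_path):
    """Walk a dotted path through nested dicts, raising on a missing key,
    similar in spirit to Ansible's AnsibleUndefinedVariable (illustrative)."""
    cur = ctx
    for part in dotted_path.split("."):
        try:
            cur = cur[part]
        except (KeyError, TypeError):
            raise AttributeError(f"'dict object' has no attribute '{part}'")
    return cur

# Keys mirror the es section of the ClusterConfiguration in this issue.
ks_vars = {"common": {"es": {"logMaxAge": 7, "elkPrefix": "logstash"}}}
```

Here lookup(ks_vars, "common.es.elkPrefix") succeeds, while lookup(ks_vars, "common.es.opensearchPrefix") raises with the same "'dict object' has no attribute" wording seen in the failed task.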

pixiake commented 1 year ago

You can remove the ClusterConfiguration section from the configuration file, then run kk create cluster -f config-sample.yaml --with-kubesphere v3.4.0
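The config file in this issue is a multi-document YAML: the Cluster spec, then a "---" separator, then the ClusterConfiguration. The workaround drops the second document so kk regenerates it from the --with-kubesphere flag. A minimal stdlib-only sketch of that trim, assuming the documents are separated by a line that is exactly "---" (the function name is illustrative):

```python
def keep_cluster_document(yaml_text):
    """Keep only the first YAML document (kind: Cluster) of a
    multi-document string, dropping the ClusterConfiguration that
    follows the first "---" separator line."""
    return yaml_text.split("\n---\n", 1)[0].rstrip() + "\n"
```

Applying this to the config above and then running the kk command from the comment reproduces the suggested workaround.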

yangliuyu commented 1 year ago

You can remove the ClusterConfiguration section from the configuration file, then run kk create cluster -f config-sample.yaml --with-kubesphere v3.4.0

It works now. Thanks.