kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, plus related cloud-native add-ons. It supports all-in-one, multi-node, and HA installation 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

Problems installing KubeSphere v3.3.0 with Kubernetes v1.25.3 using kk 3.0.2 #1904

Open fenlin88l opened 1 year ago

fenlin88l commented 1 year ago

Which version of KubeKey has the issue?

./kk version kk version: &version.Info{Major:"3", Minor:"0", GitVersion:"v3.0.2", GitCommit:"1c395d22e75528d0a7d07c40e1af4830de265a23", GitTreeState:"clean", BuildDate:"2022-11-22T02:04:26Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

What is your OS environment?

CentOS 7, kernel 3.10.0-1160.an7.x86_64

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 100.x.x.x, internalAddress: 100.x.x.x, user: xxx, password: "xxxxxx"}
  - {name: node01, address: 100.x.x.x, internalAddress: 100.x.x.x, user: xxx, password: "xxxxxx"}
  - {name: node02, address: 100.x.x.x, internalAddress: 100.x.x.x, user: xxx, password: "xxxxxx"}
  roleGroups:
    etcd:
    - master
    control-plane: 
    - master
    worker:
    - node01
    - node02
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.25.3
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    #type: harbor
    #auths:
    #   host: 
    #    port: 8086
    #    username: admin
    #    password: Harbor12345
    #    certsPath: "/etc/docker/certs.d/dockerhub.kubekey.local"
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
  #- name: nfs-client
  #  namespace: kube-system
  #  sources:
  #    chart:
  #      name: nfs-client-provisioner
  #      repo: https://charts.kubesphere.io/main
  #      values:
  #      - nfs.server=
  #      - nfs.path=/nfs/data
  #      - storageClass.defaultClass=true

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 1200m
    jenkinsJavaOpts_Xmx: 1600m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600

A clear and concise description of what happened.

Related configuration:

apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379

Relevant log output


./kk create config --with-kubernetes v1.25.3  --with-kubesphere v3.3.0

./kk create cluster -f config-sample.yaml --container-manager containerd 
After the installation reached the step "Please wait for the installation to complete: >>--->", checking the log showed the following:
16:11:37 CST failed: [master]
error: Pipeline[CreateClusterPipeline] execute failed: Module[CheckResultModule] exec failed:failed: [master] execute task timeout, Timeout=2h

Error: failed to fetch group version resources batch/v1beta1: the server could not find the requested resource
2023/07/05 16:31:19 failed to fetch group version resources batch/v1beta1: the server could not find the requested resource
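
For context, batch/v1beta1 (the old CronJob API) was removed in Kubernetes v1.25, so any component that still requests that group/version fails against a v1.25.3 API server. A quick way to confirm which batch API versions the cluster actually serves (standard kubectl commands, not taken from the report above):

kubectl api-versions | grep '^batch/'        # on v1.25+ this prints only batch/v1
kubectl api-resources --api-group=batch      # CronJob is now served from batch/v1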

11:55:49 CST success: [master]
11:55:49 CST [CheckResultModule] Check ks-installer result
13:55:49 CST failed: [master]

When logging in to the KubeSphere console, the login page reports: request to http://ks-apiserver/oauth/token failed, reason: connect ECONNREFUSED 10.233.8.38:80
![image](https://github.com/kubesphere/kubekey/assets/34697390/e700054e-25cc-46a4-a58b-0a8df7fd4704)
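
A connection refused against the ks-apiserver service usually means the ks-apiserver pod never became ready. A minimal check, assuming the default kubesphere-system namespace and the app=ks-apiserver label used by the ks-core chart:

kubectl -n kubesphere-system get pods
kubectl -n kubesphere-system get svc ks-apiserver
kubectl -n kubesphere-system describe pod -l app=ks-apiserver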

### Additional information

_No response_
redscholar commented 1 year ago

From your logs, it seems that the installation of KubeSphere has failed. KubeSphere is installed using ks-installer, and the ks-installer pod is in the kubesphere-system namespace. Can you provide the logs of the ks-installer pod?
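
For reference, a command commonly used to follow the ks-installer logs (it looks the pod up by the app=ks-installer label, which assumes the default ks-installer deployment):

kubectl logs -n kubesphere-system -f \
  $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}')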

fenlin88l commented 1 year ago

From your logs, it seems that the installation of KubeSphere has failed. KubeSphere is installed using ks-installer, and the ks-installer pod is in the kubesphere-system namespace. Can you provide the logs of the ks-installer pod?

[root@# kubectl logs -f pod/ks-installer-555b855cdc-gtcjx -n kubesphere-system
2023-07-06T17:13:59+08:00 INFO : shell-operator latest
2023-07-06T17:13:59+08:00 INFO : Use temporary dir: /tmp/shell-operator
2023-07-06T17:13:59+08:00 INFO : Initialize hooks manager ...
2023-07-06T17:13:59+08:00 INFO : Search and load hooks ...
2023-07-06T17:13:59+08:00 INFO : HTTP SERVER Listening on 0.0.0.0:9115
2023-07-06T17:13:59+08:00 INFO : Load hook config from '/hooks/kubesphere/installRunner.py'
2023-07-06T17:14:00+08:00 INFO : Load hook config from '/hooks/kubesphere/schedule.sh'
2023-07-06T17:14:00+08:00 INFO : Initializing schedule manager ...
2023-07-06T17:14:00+08:00 INFO : KUBE Init Kubernetes client
2023-07-06T17:14:00+08:00 INFO : KUBE-INIT Kubernetes client is configured successfully
2023-07-06T17:14:00+08:00 INFO : MAIN: run main loop
2023-07-06T17:14:00+08:00 INFO : MAIN: add onStartup tasks
2023-07-06T17:14:00+08:00 INFO : Running schedule manager ...
2023-07-06T17:14:00+08:00 INFO : QUEUE add all HookRun@OnStartup
2023-07-06T17:14:00+08:00 INFO : MSTOR Create new metric shell_operator_live_ticks
2023-07-06T17:14:00+08:00 INFO : MSTOR Create new metric shell_operator_tasks_queue_length
2023-07-06T17:14:00+08:00 INFO : GVR for kind 'ClusterConfiguration' is installer.kubesphere.io/v1alpha1, Resource=clusterconfigurations
2023-07-06T17:14:00+08:00 INFO : EVENT Kube event '768ceec5-268d-4eb5-b38c-8e6c1c52d020'
2023-07-06T17:14:00+08:00 INFO : QUEUE add TASK_HOOK_RUN@KUBE_EVENTS kubesphere/installRunner.py
2023-07-06T17:14:03+08:00 INFO : TASK_RUN HookRun@KUBE_EVENTS kubesphere/installRunner.py
2023-07-06T17:14:03+08:00 INFO : Running hook 'kubesphere/installRunner.py' binding 'KUBE_EVENTS' ...
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ***

TASK [download : Generating images list] *** skipping: [localhost]

TASK [download : Synchronizing images] *****

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] *** ok: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] ***** ok: [localhost] => { "msg": "Check roles/kubesphere-defaults/defaults/main.yml" }

TASK [preinstall : KubeSphere | Stopping if Kubernetes version is nonsupport] *** ok: [localhost] => { "changed": false, "msg": "All assertions passed" }

TASK [preinstall : KubeSphere | Checking StorageClass] ***** changed: [localhost]

TASK [preinstall : KubeSphere | Stopping if StorageClass was not found] **** skipping: [localhost]

TASK [preinstall : KubeSphere | Checking default StorageClass] ***** changed: [localhost]

TASK [preinstall : KubeSphere | Stopping if default StorageClass was not found] *** ok: [localhost] => { "changed": false, "msg": "All assertions passed" }

TASK [preinstall : KubeSphere | Checking KubeSphere component] ***** changed: [localhost]

TASK [preinstall : KubeSphere | Getting KubeSphere component version] ** skipping: [localhost]

TASK [preinstall : KubeSphere | Getting KubeSphere component version] ** skipping: [localhost] => (item=ks-openldap) skipping: [localhost] => (item=ks-redis) skipping: [localhost] => (item=ks-minio) skipping: [localhost] => (item=ks-openpitrix) skipping: [localhost] => (item=elasticsearch-logging) skipping: [localhost] => (item=elasticsearch-logging-curator) skipping: [localhost] => (item=istio) skipping: [localhost] => (item=istio-init) skipping: [localhost] => (item=jaeger-operator) skipping: [localhost] => (item=ks-jenkins) skipping: [localhost] => (item=ks-sonarqube) skipping: [localhost] => (item=logging-fluentbit-operator) skipping: [localhost] => (item=uc) skipping: [localhost] => (item=metrics-server)

PLAY RECAP ***** localhost : ok=7 changed=3 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ***

TASK [download : Generating images list] *** skipping: [localhost]

TASK [download : Synchronizing images] *****

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] *** ok: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] ***** ok: [localhost] => { "msg": "Check roles/kubesphere-defaults/defaults/main.yml" }

TASK [Metrics-Server | Getting metrics-server installation files] ** skipping: [localhost]

TASK [metrics-server : Metrics-Server | Creating manifests] **** skipping: [localhost] => (item={'file': 'metrics-server.yaml'})

TASK [metrics-server : Metrics-Server | Checking Metrics-Server] *** skipping: [localhost]

TASK [Metrics-Server | Uninstalling old metrics-server] **** skipping: [localhost]

TASK [Metrics-Server | Installing new metrics-server] ** skipping: [localhost]

TASK [metrics-server : Metrics-Server | Waitting for metrics.k8s.io ready] ***** skipping: [localhost]

TASK [Metrics-Server | Importing metrics-server status] **** skipping: [localhost]

PLAY RECAP ***** localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=9 rescued=0 ignored=0 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ***

TASK [download : Generating images list] *** skipping: [localhost]

TASK [download : Synchronizing images] *****

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] *** ok: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] ***** ok: [localhost] => { "msg": "Check roles/kubesphere-defaults/defaults/main.yml" }

TASK [common : KubeSphere | Checking kube-node-lease namespace] **** changed: [localhost]

TASK [common : KubeSphere | Getting system namespaces] ***** ok: [localhost]

TASK [common : set_fact] *** ok: [localhost]

TASK [common : debug] ** ok: [localhost] => { "msg": [ "kubesphere-system", "kubesphere-controls-system", "kubesphere-monitoring-system", "kubesphere-monitoring-federated", "kube-node-lease" ] }

TASK [common : KubeSphere | Creating KubeSphere namespace] ***** changed: [localhost] => (item=kubesphere-system) changed: [localhost] => (item=kubesphere-controls-system) changed: [localhost] => (item=kubesphere-monitoring-system) changed: [localhost] => (item=kubesphere-monitoring-federated) changed: [localhost] => (item=kube-node-lease)

TASK [common : KubeSphere | Labeling system-workspace] ***** changed: [localhost] => (item=default) changed: [localhost] => (item=kube-public) changed: [localhost] => (item=kube-system) changed: [localhost] => (item=kubesphere-system) changed: [localhost] => (item=kubesphere-controls-system) changed: [localhost] => (item=kubesphere-monitoring-system) changed: [localhost] => (item=kubesphere-monitoring-federated) changed: [localhost] => (item=kube-node-lease)

TASK [common : KubeSphere | Labeling namespace for network policy] ***** changed: [localhost]

TASK [common : KubeSphere | Getting Kubernetes master num] ***** changed: [localhost]

TASK [common : KubeSphere | Setting master num] **** ok: [localhost]

TASK [KubeSphere | Getting common component installation files] **** changed: [localhost] => (item=common)

TASK [common : KubeSphere | Checking Kubernetes version] *** changed: [localhost]

TASK [KubeSphere | Getting common component installation files] **** changed: [localhost] => (item=snapshot-controller)

TASK [common : KubeSphere | Creating snapshot controller values] *** changed: [localhost] => (item={'name': 'custom-values-snapshot-controller', 'file': 'custom-values-snapshot-controller.yaml'})

TASK [common : KubeSphere | Updating snapshot crd] ***** changed: [localhost]

TASK [common : KubeSphere | Deploying snapshot controller] ***** changed: [localhost]

TASK [KubeSphere | Checking openpitrix common component] *** changed: [localhost]

TASK [common : include_tasks] ** skipping: [localhost] => (item={'op': 'openpitrix-db', 'ks': 'mysql-pvc'}) skipping: [localhost] => (item={'op': 'openpitrix-etcd', 'ks': 'etcd-pvc'})

TASK [common : Getting PersistentVolumeName (mysql)] *** skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (mysql)] *** skipping: [localhost]

TASK [common : Setting PersistentVolumeName (mysql)] *** skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (mysql)] *** skipping: [localhost]

TASK [common : Getting PersistentVolumeName (etcd)] **** skipping: [localhost]

TASK [common : Getting PersistentVolumeSize (etcd)] **** skipping: [localhost]

TASK [common : Setting PersistentVolumeName (etcd)] **** skipping: [localhost]

TASK [common : Setting PersistentVolumeSize (etcd)] **** skipping: [localhost]

TASK [common : KubeSphere | Checking mysql PersistentVolumeClaim] ** changed: [localhost]

TASK [common : KubeSphere | Setting mysql db pv size] ** skipping: [localhost]

TASK [common : KubeSphere | Checking redis PersistentVolumeClaim] ** changed: [localhost]

TASK [common : KubeSphere | Setting redis db pv size] ** skipping: [localhost]

TASK [common : KubeSphere | Checking minio PersistentVolumeClaim] ** changed: [localhost]

TASK [common : KubeSphere | Setting minio pv size] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking openldap PersistentVolumeClaim] *** changed: [localhost]

TASK [common : KubeSphere | Setting openldap pv size] ** skipping: [localhost]

TASK [common : KubeSphere | Checking etcd db PersistentVolumeClaim] **** changed: [localhost]

TASK [common : KubeSphere | Setting etcd pv size] ** skipping: [localhost]

TASK [common : KubeSphere | Checking redis ha PersistentVolumeClaim] *** changed: [localhost]

TASK [common : KubeSphere | Setting redis ha pv size] ** skipping: [localhost]

TASK [common : KubeSphere | Checking es-master PersistentVolumeClaim] ** changed: [localhost]

TASK [common : KubeSphere | Setting es master pv size] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking es data PersistentVolumeClaim] **** changed: [localhost]

TASK [common : KubeSphere | Setting es data pv size] *** skipping: [localhost]

TASK [KubeSphere | Creating common component manifests] **** changed: [localhost] => (item={'path': 'redis', 'file': 'redis.yaml'})

TASK [common : KubeSphere | Deploying etcd and mysql] ** skipping: [localhost] => (item=etcd.yaml) skipping: [localhost] => (item=mysql.yaml)

TASK [common : KubeSphere | Getting minio installation files] ** skipping: [localhost] => (item=minio-ha)

TASK [common : KubeSphere | Creating manifests] **** skipping: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'})

TASK [common : KubeSphere | Checking minio] **** skipping: [localhost]

TASK [common : KubeSphere | Deploying minio] *** skipping: [localhost]

TASK [common : debug] ** skipping: [localhost]

TASK [common : fail] *** skipping: [localhost]

TASK [common : KubeSphere | Importing minio status] **** skipping: [localhost]

TASK [common : KubeSphere | Generet Random password] *** skipping: [localhost]

TASK [common : KubeSphere | Creating Redis Password Secret] **** skipping: [localhost]

TASK [common : KubeSphere | Getting redis installation files] ** skipping: [localhost] => (item=redis-ha)

TASK [common : KubeSphere | Creating manifests] **** skipping: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'})

TASK [common : KubeSphere | Checking old redis status] ***** skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old redis svc] ***** skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] *** skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] *** skipping: [localhost] => (item=redis.yaml)

TASK [common : KubeSphere | Importing redis status] **** skipping: [localhost]

TASK [common : KubeSphere | Getting openldap installation files] *** skipping: [localhost] => (item=openldap-ha)

TASK [common : KubeSphere | Creating manifests] **** skipping: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'})

TASK [common : KubeSphere | Checking old openldap status] ** skipping: [localhost]

TASK [common : KubeSphere | Shutdown ks-account] *** skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old openldap svc] ** skipping: [localhost]

TASK [common : KubeSphere | Checking openldap] ***** skipping: [localhost]

TASK [common : KubeSphere | Deploying openldap] **** skipping: [localhost]

TASK [common : KubeSphere | Loading old openldap data] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking openldap-ha status] *** skipping: [localhost]

TASK [common : KubeSphere | Getting openldap-ha pod list] ** skipping: [localhost]

TASK [common : KubeSphere | Getting old openldap data] ***** skipping: [localhost]

TASK [common : KubeSphere | Migrating openldap data] *** skipping: [localhost]

TASK [common : KubeSphere | Disabling old openldap] **** skipping: [localhost]

TASK [common : KubeSphere | Restarting openldap] *** skipping: [localhost]

TASK [common : KubeSphere | Restarting ks-account] ***** skipping: [localhost]

TASK [common : KubeSphere | Importing openldap status] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking KubeSphere Config is Exists] ** changed: [localhost]

TASK [common : KubeSphere | Generet Random password] *** skipping: [localhost]

TASK [common : KubeSphere | Creating Redis Password Secret] **** skipping: [localhost]

TASK [common : KubeSphere | Getting redis installation files] ** skipping: [localhost] => (item=redis-ha)

TASK [common : KubeSphere | Creating manifests] **** skipping: [localhost] => (item={'name': 'custom-values-redis', 'file': 'custom-values-redis.yaml'})

TASK [common : KubeSphere | Checking old redis status] ***** skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old redis svc] ***** skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] *** skipping: [localhost]

TASK [common : KubeSphere | Deploying redis] *** skipping: [localhost] => (item=redis.yaml)

TASK [common : KubeSphere | Importing redis status] **** skipping: [localhost]

TASK [common : KubeSphere | Getting openldap installation files] *** skipping: [localhost] => (item=openldap-ha)

TASK [common : KubeSphere | Creating manifests] **** skipping: [localhost] => (item={'name': 'custom-values-openldap', 'file': 'custom-values-openldap.yaml'})

TASK [common : KubeSphere | Checking old openldap status] ** skipping: [localhost]

TASK [common : KubeSphere | Shutdown ks-account] *** skipping: [localhost]

TASK [common : KubeSphere | Deleting and backup old openldap svc] ** skipping: [localhost]

TASK [common : KubeSphere | Checking openldap] ***** skipping: [localhost]

TASK [common : KubeSphere | Deploying openldap] **** skipping: [localhost]

TASK [common : KubeSphere | Loading old openldap data] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking openldap-ha status] *** skipping: [localhost]

TASK [common : KubeSphere | Getting openldap-ha pod list] ** skipping: [localhost]

TASK [common : KubeSphere | Getting old openldap data] ***** skipping: [localhost]

TASK [common : KubeSphere | Migrating openldap data] *** skipping: [localhost]

TASK [common : KubeSphere | Disabling old openldap] **** skipping: [localhost]

TASK [common : KubeSphere | Restarting openldap] *** skipping: [localhost]

TASK [common : KubeSphere | Restarting ks-account] ***** skipping: [localhost]

TASK [common : KubeSphere | Importing openldap status] ***** skipping: [localhost]

TASK [common : KubeSphere | Getting minio installation files] ** skipping: [localhost] => (item=minio-ha)

TASK [common : KubeSphere | Creating manifests] **** skipping: [localhost] => (item={'name': 'custom-values-minio', 'file': 'custom-values-minio.yaml'})

TASK [common : KubeSphere | Checking minio] **** skipping: [localhost]

TASK [common : KubeSphere | Deploying minio] *** skipping: [localhost]

TASK [common : debug] ** skipping: [localhost]

TASK [common : fail] *** skipping: [localhost]

TASK [common : KubeSphere | Importing minio status] **** skipping: [localhost]

TASK [common : KubeSphere | Getting elasticsearch and curator installation files] *** skipping: [localhost]

TASK [common : KubeSphere | Creating custom manifests] ***** skipping: [localhost] => (item={'name': 'custom-values-elasticsearch', 'file': 'custom-values-elasticsearch.yaml'}) skipping: [localhost] => (item={'name': 'custom-values-elasticsearch-curator', 'file': 'custom-values-elasticsearch-curator.yaml'})

TASK [common : KubeSphere | Checking elasticsearch data StatefulSet] *** skipping: [localhost]

TASK [common : KubeSphere | Checking elasticsearch storageclass] *** skipping: [localhost]

TASK [common : KubeSphere | Commenting elasticsearch storageclass parameter] *** skipping: [localhost]

TASK [common : KubeSphere | Creating elasticsearch credentials secret] ***** skipping: [localhost]

TASK [common : KubeSphere | Checking internal es] ** skipping: [localhost]

TASK [common : KubeSphere | Deploying elasticsearch-logging] *** skipping: [localhost]

TASK [common : KubeSphere | Getting PersistentVolume Name] ***** skipping: [localhost]

TASK [common : KubeSphere | Patching PersistentVolume (persistentVolumeReclaimPolicy)] *** skipping: [localhost]

TASK [common : KubeSphere | Deleting elasticsearch] **** skipping: [localhost]

TASK [common : KubeSphere | Waiting for seconds] *** skipping: [localhost]

TASK [common : KubeSphere | Deploying elasticsearch-logging] *** skipping: [localhost]

TASK [common : KubeSphere | Importing es status] *** skipping: [localhost]

TASK [common : KubeSphere | Deploying elasticsearch-logging-curator] *** skipping: [localhost]

TASK [common : KubeSphere | Getting fluentbit installation files] ** skipping: [localhost]

TASK [common : ks-logging | Getting Kubernetes Node info] ** skipping: [localhost]

TASK [common : ks-logging | Setting container runtime of kubernetes] *** skipping: [localhost]

TASK [common : ks-logging | Setting container runtime of kubernetes] *** skipping: [localhost]

TASK [common : ks-logging | Debug container_runtime] *** skipping: [localhost]

TASK [common : ks-logging | Debug logging_container_runtime] *** skipping: [localhost]

TASK [common : KubeSphere | Creating custom manifests] ***** skipping: [localhost] => (item={'path': 'fluentbit', 'file': 'custom-fluentbit-fluentBit.yaml'}) skipping: [localhost] => (item={'path': 'init', 'file': 'custom-fluentbit-operator-deployment.yaml'})

TASK [common : KubeSphere | Preparing fluentbit operator setup] **** skipping: [localhost]

TASK [common : KubeSphere | Deploying new fluentbit operator] ** skipping: [localhost]

TASK [common : KubeSphere | Importing fluentbit status] **** skipping: [localhost]

TASK [common : Setting persistentVolumeReclaimPolicy (mysql)] ** skipping: [localhost]

TASK [common : Setting persistentVolumeReclaimPolicy (etcd)] *** skipping: [localhost]

PLAY RECAP ***** localhost : ok=28 changed=22 unreachable=0 failed=0 skipped=111 rescued=0 ignored=0 [WARNING]: No inventory was parsed, only implicit localhost is available [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] ***

TASK [download : Generating images list] *** skipping: [localhost]

TASK [download : Synchronizing images] *****

TASK [kubesphere-defaults : KubeSphere | Setting images' namespace override] *** ok: [localhost]

TASK [kubesphere-defaults : KubeSphere | Configuring defaults] ***** ok: [localhost] => { "msg": "Check roles/kubesphere-defaults/defaults/main.yml" }

TASK [ks-core/init-token : KubeSphere | Creating KubeSphere directory] ***** ok: [localhost]

TASK [ks-core/init-token : KubeSphere | Getting installation init files] *** changed: [localhost] => (item=jwt-script)

TASK [ks-core/init-token : KubeSphere | Creating KubeSphere Secret] **** changed: [localhost]

TASK [ks-core/init-token : KubeSphere | Creating KubeSphere Secret] **** ok: [localhost]

TASK [ks-core/init-token : KubeSphere | Creating KubeSphere Secret] **** skipping: [localhost]

TASK [ks-core/init-token : KubeSphere | Enabling Token Script] ***** changed: [localhost]

TASK [ks-core/init-token : KubeSphere | Getting KubeSphere Token] ** changed: [localhost]

TASK [ks-core/init-token : KubeSphere | Checking KubeSphere secrets] *** changed: [localhost]

TASK [ks-core/init-token : KubeSphere | Deleting KubeSphere secret] **** skipping: [localhost]

TASK [ks-core/init-token : KubeSphere | Creating components token] ***** changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Setting Kubernetes version] *** ok: [localhost]

TASK [ks-core/ks-core : KubeSphere | Getting Kubernetes master num] **** changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Setting master num] *** ok: [localhost]

TASK [ks-core/ks-core : KubeSphere | Override master num] ** skipping: [localhost]

TASK [ks-core/ks-core : KubeSphere | Setting enableHA] ***** ok: [localhost]

TASK [ks-core/ks-core : KubeSphere | Checking ks-core Helm Release] **** changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Checking ks-core Exsit] *** changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Convert ks-core to helm mananged] ***** skipping: [localhost] => (item={'ns': 'kubesphere-controls-system', 'kind': 'serviceaccounts', 'resource': 'kubesphere-cluster-admin', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-controls-system', 'kind': 'serviceaccounts', 'resource': 'kubesphere-router-serviceaccount', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-controls-system', 'kind': 'role', 'resource': 'system:kubesphere-router-role', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-controls-system', 'kind': 'rolebinding', 'resource': 'nginx-ingress-role-nisa-binding', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-controls-system', 'kind': 'deployment', 'resource': 'default-http-backend', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-controls-system', 'kind': 'service', 'resource': 'default-http-backend', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'secrets', 'resource': 'ks-controller-manager-webhook-cert', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'serviceaccounts', 'resource': 'kubesphere', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'configmaps', 'resource': 'ks-console-config', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'configmaps', 'resource': 'ks-router-config', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'configmaps', 'resource': 'sample-bookinfo', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'clusterroles', 'resource': 'system:kubesphere-router-clusterrole', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'clusterrolebindings', 'resource': 'system:nginx-ingress-clusterrole-nisa-binding', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'clusterrolebindings', 'resource': 'system:kubesphere-cluster-admin', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'clusterrolebindings', 'resource': 'kubesphere', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'services', 'resource': 'ks-apiserver', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'services', 'resource': 'ks-console', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'services', 'resource': 'ks-controller-manager', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'deployments', 'resource': 'ks-apiserver', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'deployments', 'resource': 'ks-console', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'deployments', 'resource': 'ks-controller-manager', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'validatingwebhookconfigurations', 'resource': 'users.iam.kubesphere.io', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'validatingwebhookconfigurations', 'resource': 'resourcesquotas.quota.kubesphere.io', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 
'validatingwebhookconfigurations', 'resource': 'network.kubesphere.io', 'release': 'ks-core'}) skipping: [localhost] => (item={'ns': 'kubesphere-system', 'kind': 'users.iam.kubesphere.io', 'resource': 'admin', 'release': 'ks-core'})

TASK [ks-core/ks-core : KubeSphere | Patch admin user] ***** skipping: [localhost]

TASK [ks-core/ks-core : KubeSphere | Getting ks-core helm charts] ** changed: [localhost] => (item=ks-core)

TASK [ks-core/ks-core : KubeSphere | Creating manifests] *** changed: [localhost] => (item={'path': 'ks-core', 'file': 'custom-values-ks-core.yaml'})

TASK [ks-core/ks-core : KubeSphere | Upgrade CRDs] ***** changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/app_v1beta1_application.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/application.kubesphere.io_helmapplications.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/application.kubesphere.io_helmapplicationversions.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/application.kubesphere.io_helmcategories.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/application.kubesphere.io_helmreleases.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/application.kubesphere.io_helmrepos.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/cluster.kubesphere.io_clusters.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/gateway.kubesphere.io_gateways.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/gateway.kubesphere.io_nginxes.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_federatedrolebindings.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_federatedroles.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_federatedusers.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_globalrolebindings.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_globalroles.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_groupbindings.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_groups.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_loginrecords.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_rolebases.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_users.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_workspacerolebindings.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/iam.kubesphere.io_workspaceroles.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/network.kubesphere.io_ipamblocks.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/network.kubesphere.io_ipamhandles.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/network.kubesphere.io_ippools.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/network.kubesphere.io_namespacenetworkpolicies.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/quota.kubesphere.io_resourcequotas.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/servicemesh.kubesphere.io_servicepolicies.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/servicemesh.kubesphere.io_strategies.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/storage.kubesphere.io_storageclasseraccessor.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/tenant.kubesphere.io_workspaces.yaml) changed: [localhost] => (item=/kubesphere/kubesphere/ks-core/crds/tenant.kubesphere.io_workspacetemplates.yaml)

TASK [ks-core/ks-core : KubeSphere | Creating ks-core] ***** changed: [localhost]

TASK [ks-core/ks-core : KubeSphere | Importing ks-core status] ***** changed: [localhost]

TASK [ks-core/prepare : KubeSphere | Checking core components (1)] ***** changed: [localhost]

TASK [ks-core/prepare : KubeSphere | Checking core components (2)] ***** changed: [localhost]

TASK [ks-core/prepare : KubeSphere | Checking core components (3)] ***** skipping: [localhost]

TASK [ks-core/prepare : KubeSphere | Checking core components (4)] ***** skipping: [localhost]

TASK [ks-core/prepare : KubeSphere | Updating ks-core status] ** skipping: [localhost]

TASK [ks-core/prepare : set_fact] ** skipping: [localhost]

TASK [ks-core/prepare : KubeSphere | Creating KubeSphere directory] **** ok: [localhost]

TASK [ks-core/prepare : KubeSphere | Getting installation init files] ** changed: [localhost] => (item=ks-init)

TASK [ks-core/prepare : KubeSphere | Initing KubeSphere] *** changed: [localhost] => (item=role-templates.yaml)

TASK [ks-core/prepare : KubeSphere | Generating kubeconfig-admin] ** skipping: [localhost]

PLAY RECAP ***** localhost : ok=26 changed=18 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 Start installing monitoring Start installing multicluster Start installing openpitrix Start installing network

Waiting for all tasks to be completed ... task network status is successful (1/4) task openpitrix status is successful (2/4) task multicluster status is successful (3/4) task monitoring status is failed (4/4)

Collecting installation results ...

Task 'monitoring' failed:

{ "counter": 118, "created": "2023-07-06T09:16:48.018096", "end_line": 113, "event": "runner_on_failed", "event_data": { "duration": 35.026231, "end": "2023-07-06T09:16:48.017999", "event_loop": null, "host": "localhost", "ignore_errors": null, "play": "localhost", "play_pattern": "localhost", "play_uuid": "56a3424d-9bcb-6397-f6c1-000000000005", "playbook": "/kubesphere/playbooks/monitoring.yaml", "playbook_uuid": "afdb848e-27ba-41b8-b537-c1ff066d09c6", "remote_addr": "127.0.0.1", "res": { "changed": true, "msg": "All items completed", "results": [ { "_ansible_item_label": "prometheus", "_ansible_no_log": false, "ansible_loop_var": "item", "attempts": 5, "changed": true, "cmd": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus", "delta": "0:00:00.237101", "end": "2023-07-06 17:16:30.545687", "failed": true, "failed_when_result": true, "invocation": { "module_args": { "_raw_params": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true } }, "item": "prometheus", "msg": "non-zero return code", "rc": 1, "start": "2023-07-06 17:16:30.308586", "stderr": "error: unable to recognize "/kubesphere/kubesphere/prometheus/prometheus/prometheus-podDisruptionBudget.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"", "stderr_lines": [ "error: unable to recognize "/kubesphere/kubesphere/prometheus/prometheus/prometheus-podDisruptionBudget.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"" ], "stdout": "clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nprometheus.monitoring.coreos.com/k8s unchanged\nprometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules unchanged\nrolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nrole.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nservice/prometheus-k8s unchanged\nserviceaccount/prometheus-k8s unchanged\nservicemonitor.monitoring.coreos.com/prometheus-k8s unchanged", "stdout_lines": [ "clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged", "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged", "prometheus.monitoring.coreos.com/k8s unchanged", "prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules unchanged", "rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged", "role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged", "service/prometheus-k8s unchanged", "serviceaccount/prometheus-k8s unchanged", "servicemonitor.monitoring.coreos.com/prometheus-k8s unchanged" ] }, { "_ansible_item_label": "prometheus", "_ansible_no_log": false, "ansible_loop_var": "item", "attempts": 5, "changed": true, "cmd": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus", "delta": "0:00:00.217146", "end": "2023-07-06 17:16:47.995799", "failed": true, "failed_when_result": true, "invocation": { "module_args": { "_raw_params": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true } }, "item": "prometheus", "msg": "non-zero return code", 
"rc": 1, "start": "2023-07-06 17:16:47.778653", "stderr": "error: unable to recognize "/kubesphere/kubesphere/prometheus/prometheus/prometheus-podDisruptionBudget.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"", "stderr_lines": [ "error: unable to recognize "/kubesphere/kubesphere/prometheus/prometheus/prometheus-podDisruptionBudget.yaml": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"" ], "stdout": "clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nprometheus.monitoring.coreos.com/k8s unchanged\nprometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules unchanged\nrolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nrole.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nservice/prometheus-k8s unchanged\nserviceaccount/prometheus-k8s unchanged\nservicemonitor.monitoring.coreos.com/prometheus-k8s unchanged", "stdout_lines": [ "clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged", "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged", "prometheus.monitoring.coreos.com/k8s unchanged", "prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules unchanged", "rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged", "role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged", "service/prometheus-k8s unchanged", "serviceaccount/prometheus-k8s unchanged", "servicemonitor.monitoring.coreos.com/prometheus-k8s unchanged" ] } ] }, "resolved_action": "shell", "role": "ks-monitor", "start": "2023-07-06T09:16:12.991768", "task": "Monitoring | Installing Prometheus", "task_action": "shell", "task_args": "", "task_path": "/kubesphere/installer/roles/ks-monitor/tasks/prometheus.yaml:2", "task_uuid": "56a3424d-9bcb-6397-f6c1-000000000042", "uuid": "1634900a-1164-4f2e-99cf-ffce00fc072f" }, "parent_uuid": "56a3424d-9bcb-6397-f6c1-000000000042", "pid": 5247, "runner_ident": "monitoring", "start_line": 113, "stdout": "", "uuid": "1634900a-1164-4f2e-99cf-ffce00fc072f" }

Failed to ansible-playbook result-info.yaml
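
This failure is the same class of problem as the batch/v1beta1 error earlier in the issue: policy/v1beta1 PodDisruptionBudget was removed in Kubernetes v1.25, and the ks-installer v3.3.0 image still ships a Prometheus manifest that uses it. A rough workaround sketch, assuming sed is available inside the ks-installer image and using the manifest path from the log above (policy/v1 PodDisruptionBudgets keep the same spec fields, so rewriting the apiVersion is usually enough):

kubectl -n kubesphere-system exec deploy/ks-installer -- \
  sed -i 's|policy/v1beta1|policy/v1|g' \
  /kubesphere/kubesphere/prometheus/prometheus/prometheus-podDisruptionBudget.yaml
kubectl -n kubesphere-system exec deploy/ks-installer -- \
  /usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus

The cleaner route, though, is the one suggested below: use a KubeSphere release that already targets the v1.25 APIs.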

[screenshots]

[root@ data]# telnet 10.233.53.61 80
Trying 10.233.53.61...
telnet: connect to address 10.233.53.61: Connection refused

Port 80 of ks-apiserver is not reachable. Could anyone please advise?
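
Connection refused on 10.233.53.61:80 means the ks-apiserver Service has no ready backend pod behind it. Checking the Service endpoints and the pod logs usually narrows this down (standard kubectl; the deployment name assumes a default install):

kubectl -n kubesphere-system get endpoints ks-apiserver
kubectl -n kubesphere-system logs deploy/ks-apiserver --tail=100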

redscholar commented 1 year ago

Installing KubeSphere versions prior to v3.3.2 on Kubernetes v1.25 or later does indeed cause this issue. However, it has been fixed in KubeSphere v3.4.0-rc.0. You can install KubeSphere v3.4.0-rc.0 using kubekey v3.0.8-rc.0.

Refer to https://github.com/kubesphere/kubesphere/issues/5734

fenlin88l commented 1 year ago

Installing KubeSphere versions prior to v3.3.2 on Kubernetes v1.25 or later does indeed cause this issue. However, it has been fixed in KubeSphere v3.4.0-rc.0. You can install KubeSphere v3.4.0-rc.0 using kubekey v3.0.8-rc.0.

Refer to kubesphere/kubesphere#5734

Are the images for those versions all available?

fenlin88l commented 1 year ago

Do you have links to the kubekey v3.0.8-rc.0 and KubeSphere v3.4.0-rc.0 releases?

redscholar commented 1 year ago

Please wait a moment. kubekey v3.0.8-rc.0 is currently being released.

fenlin88l commented 1 year ago

Please wait a moment. kubekey v3.0.8-rc.0 is currently being released.

Thanks! Please share a link once it's released. Also, does kubekey v3.0.7 with KubeSphere v3.3.2 have this issue? And what is the latest KubeSphere version at the moment?

redscholar commented 1 year ago

kubekey v3.0.8-rc.0 is available here: https://github.com/kubesphere/kubekey/releases/tag/v3.0.8-rc.0. You can run ./kk create config --with-kubernetes v1.25.3 --with-kubesphere v3.4.0-rc.0 to install.
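
For completeness, the full sequence with the new release would look roughly like this (the config file name and flags follow the ones already used earlier in this issue):

./kk create config --with-kubernetes v1.25.3 --with-kubesphere v3.4.0-rc.0
./kk create cluster -f config-sample.yaml --container-manager containerd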

fenlin88l commented 1 year ago

Installing KubeSphere versions prior to v3.3.2 on Kubernetes v1.25 or later does indeed cause this issue. However, it has been fixed in KubeSphere v3.4.0-rc.0. You can install KubeSphere v3.4.0-rc.0 using kubekey v3.0.8-rc.0.

Refer to kubesphere/kubesphere#5734

The installation of KubeSphere 3.3.0 with kk 3.0.2 failed. Is there any way to fix it without switching to other versions? If I use a Kubernetes version other than 1.25.3, can it succeed?

redscholar commented 1 year ago

Sure. You can use a lower version of Kubernetes (lower than 1.25) for the installation.
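
For example, something along these lines avoids the removed v1beta1 APIs entirely (v1.24.9 is only an illustrative patch release, not a specific recommendation; the versions a given kk build supports can be listed first):

./kk version --show-supported-k8s
./kk create config --with-kubernetes v1.24.9 --with-kubesphere v3.3.0
./kk create cluster -f config-sample.yaml --container-manager containerd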