kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, plus related cloud-native add-ons. Supports all-in-one, multi-node, and HA deployments 🔥 ⎈ 🐳
https://kubesphere.io
Apache License 2.0

When installing with KK, "task monitoring status is failed" #1816

Open strike94 opened 1 year ago

strike94 commented 1 year ago

Which version of KubeKey has the issue?

kk version: &version.Info{Major:"3", Minor:"0", GitVersion:"v3.0.7", GitCommit:"e755baf67198d565689d7207378174f429b508ba", GitTreeState:"clean", BuildDate:"2023-01-18T01:57:24Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

What is your OS environment?

Ubuntu 22.04

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.49.131, internalAddress: 192.168.49.131, user: root, password: "123456"}
  - {name: node, address: 192.168.49.132, internalAddress: 192.168.49.132, user: root, password: "123456"}
  roleGroups:
    etcd:
    - master
    control-plane: 
    - master
    worker:
    - master
    - node
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.25.3
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
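
For context, a cluster defined by a single config file like the one above (Cluster plus ClusterConfiguration in one document) is normally created with the kk CLI directly; no --with-kubesphere flag is needed because the KubeSphere ClusterConfiguration is already embedded. The file name below is an assumption, not taken from the report:

# Hypothetical file name; substitute the path to the config above.
export KKZONE=cn    # optional: use the CN mirror for hosts in mainland China
./kk create cluster -f config-sample.yaml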

A clear and concise description of what happened.

When installing the cluster with kk, the installation fails at the "task monitoring status" step.

Relevant log output

Waiting for all tasks to be completed ...
task network status is successful  (1/4)
task openpitrix status is successful  (2/4)
task multicluster status is successful  (3/4)
task monitoring status is failed  (4/4)
**************************************************
Collecting installation results ...

Task 'monitoring' failed:
******************************************************************************************************************************************************
{
  "counter": 117,
  "created": "2023-04-14T07:40:11.830312",
  "end_line": 112,
  "event": "runner_on_failed",
  "event_data": {
    "duration": 39.869701,
    "end": "2023-04-14T07:40:11.830186",
    "event_loop": null,
    "host": "localhost",
    "ignore_errors": null,
    "play": "localhost",
    "play_pattern": "localhost",
    "play_uuid": "22c7f0b9-c201-007b-9327-000000000005",
    "playbook": "/kubesphere/playbooks/monitoring.yaml",
    "playbook_uuid": "9edb282c-cc34-4979-b322-1c5787b9c986",
    "remote_addr": "127.0.0.1",
    "res": {
      "changed": true,
      "msg": "All items completed",
      "results": [
        {
          "_ansible_item_label": "prometheus",
          "_ansible_no_log": false,
          "ansible_loop_var": "item",
          "attempts": 5,
          "changed": true,
          "cmd": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus",
          "delta": "0:00:00.471616",
          "end": "2023-04-14 15:39:53.211456",
          "failed": true,
          "failed_when_result": true,
          "invocation": {
            "module_args": {
              "_raw_params": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus",
              "_uses_shell": true,
              "argv": null,
              "chdir": null,
              "creates": null,
              "executable": null,
              "removes": null,
              "stdin": null,
              "stdin_add_newline": true,
              "strip_empty_ends": true,
              "warn": true
            }
          },
          "item": "prometheus",
          "msg": "non-zero return code",
          "rc": 1,
          "start": "2023-04-14 15:39:52.739840",
          "stderr": "error: unable to recognize \"/kubesphere/kubesphere/prometheus/prometheus/prometheus-podDisruptionBudget.yaml\": no matches for kind \"PodDisruptionBudget\" in version \"policy/v1beta1\"",
          "stderr_lines": [
            "error: unable to recognize \"/kubesphere/kubesphere/prometheus/prometheus/prometheus-podDisruptionBudget.yaml\": no matches for kind \"PodDisruptionBudget\" in version \"policy/v1beta1\""
          ],
          "stdout": "clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nprometheus.monitoring.coreos.com/k8s unchanged\nprometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules unchanged\nrolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nrole.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nservice/prometheus-k8s unchanged\nserviceaccount/prometheus-k8s unchanged\nservicemonitor.monitoring.coreos.com/prometheus-k8s unchanged",
          "stdout_lines": [
            "clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
            "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
            "prometheus.monitoring.coreos.com/k8s unchanged",
            "prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules unchanged",
            "rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
            "role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
            "service/prometheus-k8s unchanged",
            "serviceaccount/prometheus-k8s unchanged",
            "servicemonitor.monitoring.coreos.com/prometheus-k8s unchanged"
          ]
        },
        {
          "_ansible_item_label": "prometheus",
          "_ansible_no_log": false,
          "ansible_loop_var": "item",
          "attempts": 5,
          "changed": true,
          "cmd": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus",
          "delta": "0:00:00.339981",
          "end": "2023-04-14 15:40:11.806929",
          "failed": true,
          "failed_when_result": true,
          "invocation": {
            "module_args": {
              "_raw_params": "/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus",
              "_uses_shell": true,
              "argv": null,
              "chdir": null,
              "creates": null,
              "executable": null,
              "removes": null,
              "stdin": null,
              "stdin_add_newline": true,
              "strip_empty_ends": true,
              "warn": true
            }
          },
          "item": "prometheus",
          "msg": "non-zero return code",
          "rc": 1,
          "start": "2023-04-14 15:40:11.466948",
          "stderr": "error: unable to recognize \"/kubesphere/kubesphere/prometheus/prometheus/prometheus-podDisruptionBudget.yaml\": no matches for kind \"PodDisruptionBudget\" in version \"policy/v1beta1\"",
          "stderr_lines": [
            "error: unable to recognize \"/kubesphere/kubesphere/prometheus/prometheus/prometheus-podDisruptionBudget.yaml\": no matches for kind \"PodDisruptionBudget\" in version \"policy/v1beta1\""
          ],
          "stdout": "clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nclusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged\nprometheus.monitoring.coreos.com/k8s unchanged\nprometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules unchanged\nrolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nrole.rbac.authorization.k8s.io/prometheus-k8s-config unchanged\nservice/prometheus-k8s unchanged\nserviceaccount/prometheus-k8s unchanged\nservicemonitor.monitoring.coreos.com/prometheus-k8s unchanged",
          "stdout_lines": [
            "clusterrole.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
            "clusterrolebinding.rbac.authorization.k8s.io/kubesphere-prometheus-k8s unchanged",
            "prometheus.monitoring.coreos.com/k8s unchanged",
            "prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules unchanged",
            "rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
            "role.rbac.authorization.k8s.io/prometheus-k8s-config unchanged",
            "service/prometheus-k8s unchanged",
            "serviceaccount/prometheus-k8s unchanged",
            "servicemonitor.monitoring.coreos.com/prometheus-k8s unchanged"
          ]
        }
      ]
    },
    "resolved_action": "shell",
    "role": "ks-monitor",
    "start": "2023-04-14T07:39:31.960485",
    "task": "Monitoring | Installing Prometheus",
    "task_action": "shell",
    "task_args": "",
    "task_path": "/kubesphere/installer/roles/ks-monitor/tasks/prometheus.yaml:2",
    "task_uuid": "22c7f0b9-c201-007b-9327-000000000042",
    "uuid": "8f560e81-74d4-4923-b344-3c22333720ea"
  },
  "parent_uuid": "22c7f0b9-c201-007b-9327-000000000042",
  "pid": 23964,
  "runner_ident": "monitoring",
  "start_line": 112,
  "stdout": "",
  "uuid": "8f560e81-74d4-4923-b344-3c22333720ea"
}
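
The stderr above points at the root cause: PodDisruptionBudget was served under policy/v1beta1 only through Kubernetes v1.24 and was removed in v1.25, while the ks-installer v3.3.2 Prometheus manifests still request policy/v1beta1, so kubectl apply against this v1.25.3 cluster has no matching API. A quick way to confirm which policy group versions the cluster actually serves (a diagnostic sketch, run from any host with a working kubeconfig):

# List the served versions of the policy API group; on v1.25+ only policy/v1 should appear.
kubectl api-versions | grep '^policy'
# Show which resources (including poddisruptionbudgets) the policy group serves.
kubectl api-resources --api-group=policy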

Additional information

No response

ImitationImmortal commented 1 year ago

Reference: https://github.com/kubesphere/kubesphere/issues/5734
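
The linked issue tracks the same policy/v1beta1 removal. As a stopgap (a sketch of one possible workaround, not an officially documented fix), the offending manifest can be rewritten to policy/v1 inside the ks-installer pod and re-applied; the manifest and kubectl paths below come from the failure log above, while the deployment name ks-installer is assumed from a default install:

# Open a shell in the ks-installer pod.
kubectl -n kubesphere-system exec -it deploy/ks-installer -- /bin/sh
# Inside the pod: switch the PDB manifest to the API version that v1.25 still serves.
sed -i 's#policy/v1beta1#policy/v1#' \
  /kubesphere/kubesphere/prometheus/prometheus/prometheus-podDisruptionBudget.yaml
# Re-apply the Prometheus manifests that the failed task was applying.
/usr/local/bin/kubectl apply -f /kubesphere/kubesphere/prometheus/prometheus

The more durable fix is to use a ks-installer release whose manifests already request policy/v1, as discussed in the linked issue.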